Remote Data Access Solutions?

magoldfish asks: "Our company has several terabytes of data, typically chunked into 2-6 GB files, that we need to share among users at sites around the US. Our users range in technical skill from novice to guru, data access needs to be secure, and we require client access from multiple platforms (Linux, Mac, and Windows). These files will likely be accessed infrequently, and cached on local hard drives by our power users. We've been running WebDAV, primarily to accommodate non-savvy users and guarantee access from firewalled environments, but the users are really complaining about download speed — maybe 50 KB/sec serving from a Mac. Any suggestions for better alternatives, or comments on the pros and cons of alternative access techniques?"
This discussion has been archived. No new comments can be posted.

  • by LiquidCoooled ( 634315 ) on Thursday November 09, 2006 @01:37PM (#16787753) Homepage Journal
    Save bandwidth, time and support headaches.
    • by msobkow ( 48369 )

      While there is no denying the average bandwidth of a box of DVDs, there are alternatives for addressing the download speed. A 50 KB/sec download speed from their server is horrible no matter how you slice it -- plenty of corporations serve up well over 100 KB/sec per client.

      The problem is that high-capacity solutions are not cheap, and I get the impression that there must not be the budget for those options, or no one would have deployed a 50 KB/sec server in the first place.

      Aside from DVDs, you coul

  • by kperrier ( 115199 ) on Thursday November 09, 2006 @01:38PM (#16787769)
    Without knowing how the data will be used, it will be hard for anyone to offer you any recommendations.
  • Seems perfectly obvious to me: VPNs between sites, and you access data as if it were on the local network. What am I missing that makes this not an option -- or, for that matter, why isn't this the way it's done already?
  • If the data can be broken down into smaller, more logical chunks, put it in a database and let your users get the data they require.

    make sure you use more than a 24-bit key though ;)
  • One word: (Score:5, Informative)

    by Pig Hogger ( 10379 ) <pig.hogger@gmail ... m minus caffeine> on Thursday November 09, 2006 @01:40PM (#16787791) Journal
    rsync [google.ca].

    (As always, Google is your friend).
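
    As a concrete sketch of the rsync suggestion: the invocation below pulls one large file over SSH, resumes partial transfers, and on later runs sends only changed blocks. The host and paths are hypothetical, and Python's subprocess is used only as a thin wrapper around the rsync CLI.

      import subprocess

      # Pull one large data file from the central server; on repeat runs rsync's
      # delta algorithm transfers only the blocks that changed.
      subprocess.run(
          [
              "rsync",
              "-avz",        # archive mode, verbose output, compress in transit
              "--partial",   # keep partially transferred files so restarts resume
              "--progress",  # show per-file transfer progress
              "user@fileserver.example.com:/data/chunks/run42.dat",
              "/local/cache/",
          ],
          check=True,        # raise if rsync exits non-zero
      )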

  • by mac123 ( 25118 ) on Thursday November 09, 2006 @01:41PM (#16787801)
    Sounds like a job for Wide Area File Services (WAFS).

    Here's Cisco's version: WAFS [cisco.com]

  • Secure? (Score:2, Interesting)

    by ReidMaynard ( 161608 )
    Do you really mean encrypted? If so, what's wrong with https?
  • SQL servers, with access only via SSL tunnel. That means the access will be both convenient and secure.
  • if you want long-term storage, I think there are rewritable Blu-ray discs (BD-RE), and since they hold 50 GB apiece, this might be better
    (This is just my view, though.)
  • Profiling? (Score:4, Informative)

    by barnetda ( 42894 ) on Thursday November 09, 2006 @01:50PM (#16787867)
    I'm no network / data access guru, but this seems like a typical case of profile first, optimize later.

    The idea is simple. Don't just go in and change stuff; first measure the pieces under typical load. Look where the bottleneck is, address it, and move on to the next bottleneck. Repeat as often as needed.

    Are you disk I/O bound? Buy faster disk / better controllers / spread the load over more machines / .....

    Are you CPU bound? Is the CPU on your server spending so much time on I/O requests that it has no cycles available to handle additional requests? Buy more / faster / better CPUs.

    Are you network bound? Which piece of the network is the hold-up? Your switch? Get a better / faster one. Your ISP? Get a fatter pipe.

    Have you optimized all of these? What about setting up remote servers that are updated hourly/daily/weekly/whatever, so the machine is close to the user network-wise for faster download speeds?

    Some of the above adds complexity. Are you equipped to handle that complexity? Can you become equipped to handle it? If not, re-consider your options.

    Hope this helps.

    Cheers,
    Dave
    • The questioner doesn't specifically have a data transfer problem, but instead a wide-area information processing problem (of which data transfer may be a part).

      While the answer may reside with any of the main themes recommended by responders (improving transfer, reducing the amount of data to transfer, and eliminating the need to transfer via remote desktop solutions such as Citrix, MS Terminal Services, and VNC), the questioner really needs to define his needs. Does the data really need to be local at e

  • Sending out copies on physical media becomes a monumental risk if you're dealing with information that is confidential, especially if that data is covered by Sarbanes-Oxley or HIPAA. Remote sessions, whether via VPN or Apple Remote Desktop (the author said it's a Mac server, right? MS Remote Desktop if not), would side-step the distribution problem, plus the DBAs can lock down access via ACLs so users only get at the data they need/are allowed.
  • hmm, probably not a great idea to move to something unstable such as that. The problem this guy has is remote users. He probably has a lot of them, and trying to serve down gigs of data is tough. The biggest problem is the location of this data and how fast the internet lines are; they need to have large upload capacities. So the bigger the data files, the longer it takes for the remote users to pull them down. Security is great, but the problem with any secure connection is overhead -- granted it's not a lot, but still
  • by Gunfighter ( 1944 ) on Thursday November 09, 2006 @02:07PM (#16787981)
    If possible, write the app to run centrally and then use a remote desktop solution like LTSP, Citrix, or Windows Terminal Services to feed access to the app out to clients.
    • by Degrees ( 220395 )
      Agreed - why move the data to the users when you can bring the users to the data?
      • Because that puts the load on the server, not on the client. If you have 10 users that are all analyzing 6 GB files and converting their contents to another format over Citrix, that server is dead.

        It's a lot cheaper to have people download the files from a $4,000 server and crunch them locally than have them connect to a $50,000 server and crunch them remotely at 1/10th the speed.
        • Depends on the application needs. Citrix servers can operate in load-balancing farms, so you don't need one big server when a bunch of little ones will do. Basically, you're taking the computing horsepower off the desktop and loading it up on a server farm. If the computing is so intensive that you have many people crunching many large files, they may need to re-evaluate their data collection and storage methods to better suit the intended end result.

          To determine if a remote desktop solution is the be
        • how much are you paying the people?

          who says the 50k server has to be slower?
  • Bring everyone into the same office and upgrade everyone (including servers) to gigabit.
  • by jp10558 ( 748604 ) on Thursday November 09, 2006 @02:20PM (#16788071)
    I would expect you could use something like VNC or remote X sessions over SSH/VPN (Hamachi, OpenVPN, etc.) and keep the data local.

    Or, if you need it spread out for some reason, iFolder or rsync seem the best choices. However, you could also look at AFS.

    Basically, you have to get the long haul data transfers down somehow, or else get faster connections.
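
    A minimal sketch of the tunnelling idea, assuming an SSH-reachable data host (the host name is hypothetical) running a VNC server on display :1; the viewer then connects to localhost:5901 and the data itself never leaves the server.

      import subprocess

      # Forward local port 5901 to the VNC server on the data host over SSH;
      # -N means "run no remote command", just hold the tunnel open.
      subprocess.run(
          ["ssh", "-N", "-L", "5901:localhost:5901", "user@datahost.example.com"],
          check=True,
      )
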
  • "Never underestimate the bandwidth of a station wagon full of tapes."

    Of course, the latency kind of sucks, but that doesn't seem to affect your requirements. And, these days, you're just as likely to pop it in a FedEx canister, and they don't use station wagons. But the saying still holds...
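
    For a rough sense of scale, a back-of-envelope version of that saying, with illustrative assumptions (100 single-layer DVDs, overnight shipping):

      # Effective bandwidth of shipping 100 single-layer DVDs overnight.
      dvd_capacity_gb = 4.7
      dvd_count = 100
      transit_seconds = 24 * 3600

      payload_mb = dvd_capacity_gb * dvd_count * 1000   # ~470,000 MB in the box
      print(payload_mb / transit_seconds)               # ~5.4 MB/sec, vs. the 0.05 MB/sec (50 KB/sec) quoted above
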
  • More of a question (Score:3, Insightful)

    by east coast ( 590680 ) on Thursday November 09, 2006 @02:42PM (#16788211)
    These files will likely be accessed infrequently, and cached on local hard drives by our power users.

    But how often do these files need to be updated? Is the end user in a read-only situation? How infrequent is infrequent? How many users are you talking about, and what's the density of these users? Even though the access is "infrequent," is this access that modifies data that would have to be shared across your entire user base?

    Your scenario needs some gaps filled in as far as the requirements go. I see a lot of people suggesting large-capacity media being shipped to the users, but if this data is being updated frequently that is not a solution. If you have a large number of users and the data is not updated often, you would still have to weigh the frequency of sending out updates to X number of users against the cost of keeping the data centralized and upgrading the infrastructure to meet the needs of these users. If you need to share changes in this data from the user end, then using physical media via postal services is going to cause problems in coordinating which version of these files the other end users should be working from. And God forbid you have several users who need to update this data in a short timeframe, because you're going to have disks being mailed in by users who don't know what changes were made by other users. You'd doubtlessly fall into some kind of data integrity problem, and even if you're talking about users updating their data only every few months, you're still going to need to hire someone to coordinate these efforts and ensure that all the end users are getting every update in a timely fashion.

    Without more information it's hard to suggest something and still be confident that we're not leading you toward a solution that is completely inappropriate.
  • by jnik ( 1733 ) on Thursday November 09, 2006 @02:43PM (#16788215)
    Your data are in 2-6GB chunks. When a user needs a chunk, do they really need the whole chunk, or just a few megabytes of it? Downloading 6GB when you need 6MB really sucks. The solutions mentioned here all work by breaking down the chunk size. Thin client reduces the data transfer to what you need to see on the screen right then. Putting in a filesystem layer allows transfer of bits of files. Using a SQL database reduces it all to queries and the records you actually need to hit. (An aside: I've had a friend beat me over the head with "The single most expensive thing your program can do is open that database connection. Once you have it, USE IT. Make the DBMS on the big beefy server do the work...they've optimized the heck out of it.")

    Figure out how the data can be broken into smaller chunks and managed...that will probably indicate what sort of tech will enable things for you.
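
    One concrete way to fetch just a slice of a large file without changing the storage format is an HTTP/1.1 Range request, which most web and WebDAV servers honor. A sketch using the Python requests library; the URL and byte offsets are hypothetical.

      import requests

      # Ask for a ~10 MB slice at the 100 MB offset instead of the whole file.
      resp = requests.get(
          "https://fileserver.example.com/data/run42.dat",
          headers={"Range": "bytes=104857600-115343359"},
          timeout=60,
      )
      resp.raise_for_status()
      if resp.status_code == 206:    # 206 Partial Content: the server honored the range
          chunk = resp.content       # just the requested bytes, not all 6 GB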

  • Depends on what you're doing with the data. Are you reading it? Writing it? Updating it? Are only some people updating it? Do they need real-time access to the database, or can it be a versioned system?

    What you need to do depends a LOT on these things, over and above the size of the data. If it's TBs of customer data, you probably want that somewhere secure and centralized, with stored procedures to query it and return subsets to your users. If it's not private data, why not let Google crawl and cache it and get
  • go with Novell (Score:1, Informative)

    by Anonymous Coward
    We run NetWare 6.5 (moving over to SUSE Open Enterprise Servers). We use the web access component of NetStorage. From Novell: "NetStorage provides simple Internet browser-based access to file storage on a Novell network. Users have secure file access from any Internet location with nothing to download or install on the user workstation. Through a browser interface, users can also access file properties and have the options of restoring recent versions and managing rights to files and folders. NetStorage lets use
    • by pugdk ( 697845 )
      Correction: You *thought* you were going to move to SUSE servers. You will in reality be moving to M$ servers, hidden behind the SUSE name. So yeah, you will have *really* good interoperability with M$ products... =)
  • Caymas Systems (http://www.caymas.com) has a box that will allow a) simple and b) secure access. The speed is good, but bandwidth can be constricted at any point between you and the end user. For the record, I used to work at Caymas.
  • AOL? (Score:2, Funny)

    by zcubed ( 916242 )
    Give them a call about sharing large files...oh, wait, never mind.
  • If you're trying to send this data out to several places at once, BitTorrent might be a good solution. At the very least it will reduce the amount that needs to be pulled directly from the central source.
  • This sounds like a reasonable use for Amazon's Simple Storage Service (S3). See http://aws.amazon.com/s3 [amazon.com] for more info, but it's a web service data storage solution that charges for usage ($0.15/GB/month for storage + $0.20/GB for transfer), is redundant and scalable, and allows you to store an "unlimited" amount of data. You can take advantage of Amazon's infrastructure and avoid needing to hire people to maintain a fleet of storage servers.
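
    A sketch of what that workflow might look like with Amazon's Python SDK (boto3): upload a chunk once, then hand users a time-limited download link instead of serving the bytes yourself. The bucket and key names are hypothetical, and credentials are assumed to be configured in the environment.

      import boto3

      s3 = boto3.client("s3")

      # Upload one data chunk once...
      s3.upload_file("/data/chunks/run42.dat", "example-corp-data", "chunks/run42.dat")

      # ...then generate a pre-signed URL users can download from directly.
      url = s3.generate_presigned_url(
          "get_object",
          Params={"Bucket": "example-corp-data", "Key": "chunks/run42.dat"},
          ExpiresIn=3600,   # link valid for one hour
      )
      print(url)
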
  • Citrix.

    The client works on Mac, Linux, and Windows, can be installed from and run in a web browser, needs only about as much bandwidth as a VNC connection, and if your connection is interrupted for whatever reason, it will save your session state without borking whatever application you happened to be working in.
  • Warez (Score:1, Funny)

    by Anonymous Coward
    Sounds like he is sharing warez!!
  • A possible approach that is fairly transparent is to use a Distributed Storage Filesystem.

    Have a look at this article: http://www.linuxplanet.com/linuxplanet/reports/4361/1/ [linuxplanet.com] then choose amongst the more mature projects: Coda http://coda.cs.cmu.edu/ [cmu.edu] and OpenAFS http://www.openafs.org/ [openafs.org]. InterMezzo looked promising but hasn't been updated in a long while, so it's probably dead.

    Hope this helps.

  • The 50 KB/sec is going to be a limit of their connection. There's not much you're going to be able to do to improve the situation for that individual aside from (1) smaller chunks and (2) compression.
    A lot of people here have mentioned breaking up your data into smaller chunks, which is valid and the first priority.
    Have you also considered serving up a compressed version of the data, say using a .gz'd version of the data file on your server with the HTTP/1.1 header "Content-Encoding: gzip"? There's probabl
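
    A sketch of the pre-compression step (paths are hypothetical, and this only pays off if the data isn't already compressed); the web server can then send the smaller .gz body with a "Content-Encoding: gzip" response header.

      import gzip
      import shutil

      # Write a gzip-compressed copy alongside the original so the server can
      # send the smaller file to clients that accept gzip.
      with open("/data/chunks/run42.dat", "rb") as src:
          with gzip.open("/data/chunks/run42.dat.gz", "wb") as dst:
              shutil.copyfileobj(src, dst)
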
  • Set up a Microsoft Windows Server 2003, Enterprise Edition box, then install Terminal Services on it. You can have the remote sites hook into the terminal server, making the session local to where the data is found. I do know there are clients for Windows and Mac readily available. As for Unix, setting up VNC on a couple of XP Pro workstations might be the best workaround for accessing the data at the local/HQ site.

    Curious... just how many people are we talking about that need access to the data?
  • Yeah, it depends on the needs, but Terminal Server sounds like a good idea to me. I get tired of waiting for a 20MB file over VPN; I can't imagine waiting for a GB-sized file...
  • It looks to me like you are dealing with some kind of media data (movies?). If you are constrained by file format and unable/unwilling to split those files into smaller parts, use local cache servers.

    In each location, provide a small caching server that rsyncs periodically against the main data source. Then tell users in each location to use the local server.

  • I work for a company whose primary product line is a remote access strategy. http://remoteworkplace.com/ [remoteworkplace.com]

    The method you seem to want to follow makes for a large amount of redundant data, as well as being bandwidth-consuming.

    The package we provide allows users to securely log into a terminal server located at your main office (sometimes hosted by my company) and access a full desktop with nothing more than a web browser and Java installed on the computer.

    This system is ideal as it removes the nee

  • Citrix would provide you with a place for multiple users to co-exist. Yes, there are also Linux and Mac clients for it that work just fine (incl. printing and the ability to do file transfers).
    Unfortunately, implementing Citrix can be a bit pricey, especially if the apps your users will run are 'heavy' on resources - it just means you'd have to build a bigger, meaner server ($$$!). Plus it will cost you recurring yearly licensing fees... But for accessibility for remote users, you'll be hard pressed to find a m
  • This is a perfect job for AFS:
    - Mature, well-known clients available for all platforms.
    - You can control how much you cache on the local disk.
    - Control access through Kerberos, which can be done transparently on Windows (use your Active Directory server as the domain controller).
    - Built with global, distributed file systems in mind.
    - Already scales to many terabytes at many different sites - not some half-cocked idea that someone has posted on a webpage but never implemented.
  • We do this between a few sites. You didn't give much detail about how many users and how many sites, or about the file access method (CIFS, FTP, etc...?), but we do site-to-site VPNs between locations and locate Riverbed boxes at each site. Depending on the site and the number of users at each branch, you could pick a box that works for you. We like their boxes because they auto-sense other Riverbed boxes, are very easy to do any additional configuration on, and have wonderful reporting.
  • Might check out Hamachi [hamachi.cc], which is a zero-config VPN. I discovered this through a friend, and have been very happy being able to keep my desktop, server, and laptop info available; there are versions for Windows and Linux.
  • OK, so what are your users doing with the data? Are they updating it, or is it purely read-only?

    If it's read-only, and it's statistics, why don't you implement an analysis system instead? Find out what the users are reading from this data, break it up into logical fact tables and, if possible, implement and update an OLAP cube. Allow people to remote admin/Citrix/Terminal Services into the data via a medium like Crystal Reports, Analysis Services, Excel, and many other pieces of software...
  • AFS has a bit of a learning curve (okay, a big one), but it's a great free wide-area file system.
