Is BitTorrent Search Harmful? 136
protee writes "p2pnet published a report arguing that BitTorrent's robustness to free-riding might have been due more to the lack of meta-data search than to its tit-for-tat-like strategy. The question now is: how will the release of such search engines impact the BitTorrent network?"
What difference does it make? (Score:5, Informative)
There is no "BitTorrent Network" (Score:5, Informative)
Not sure I buy the analysis (Score:5, Informative)
I don't see it. If you're going to leech, that's the way to do it, but cooperating overall results in even better upload rates; you're not fighting for the few slots afforded newcomers, you will be given as many packets as you can eat as fast as you can eat them so long as you reciprocate. And I'm sure those communities will survive - I suspect that Bram will have thought of how to integrate search with community.
Blocked already (Score:4, Informative)
Re:Funny search (Score:2, Informative)
Sweden is a small country in the north of Europe.
The RIAA's imperial navy can stay the fsck out of Sweden, thank you very much.
Re:Blocked already (Score:1, Informative)
That just shows that blocking search parameters to limit search results is minimally effective, unless you throw the baby out with the bathwater.
Re:Blocked already (Score:3, Informative)
So, do an advanced search for "tiger" in the applications category,
and guess what? The torrent files are still there, and still downloadable.
Re:Serious Question (Score:3, Informative)
Re:Isn't the principle of Bittorrent... (Score:1, Informative)
By the 2nd (or was it 3rd?) release, bandwidth limiting became available - but still only at the command line and only per torrent - no ability to have 3 torrents running with total UL capped at 20K. Instead you would need to limit one to 6K and the other 2 to 7K, for example.
At any rate, limiting bandwidth at the command line by running a py script with the required switches is not something a newbie would be doing anyway.
AFAIK, it wasn't until Azureus came out that you could limit the bandwidth globally for torrents instead of a strict rate per torrent. I always used the stand-alone limiting program "NetLimiter" before.
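The by-hand arithmetic above (20K split as 6K + 7K + 7K) is exactly what a global limiter does for you. A minimal sketch of that allocation - hypothetical, not code from any actual client:

```python
# Hypothetical sketch: dividing a global upload cap evenly across running
# torrents, the way early per-torrent limiting forced you to do by hand.
def split_cap(total_kbps, n_torrents):
    """Return per-torrent allocations that sum exactly to total_kbps."""
    base = total_kbps // n_torrents
    remainder = total_kbps % n_torrents
    # Hand out the leftover kilobytes one at a time so nothing is lost to rounding.
    return [base + 1 if i < remainder else base for i in range(n_torrents)]

print(split_cap(20, 3))  # -> [7, 7, 6], i.e. the 6K/7K/7K split above
```

A real client would also redistribute a torrent's unused allocation to the others, but the rounding problem is the same.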
Re:Just wait.... (Score:2, Informative)
Re:bittorrent works...edonkey is slow (Score:3, Informative)
---from the article--
Once a client has obtained a list of other peers, it contacts them to try to fetch the data it is looking for. In BitTorrent, a file's contents are split into small pieces, and each client maintains a list of the pieces it holds. After a handshake, peers exchange their piece lists so that each of them may determine whether the other has pieces it is interested in obtaining.
Bandwidth being a limited resource, a single client cannot serve every peer interested in its pieces at the same time. The maximum number of peers served concurrently (i.e. the number of available slots) is configurable by the user. All other peers connected to a client (whether they are interested or not) which are not being served are said to be choked. In consequence, each client implements an algorithm to choose which connected peers to choke and unchoke over time.

The strategy proposed by BitTorrent is named "tit-for-tat", meaning that a client will preferentially cooperate with the peers cooperating with it. Practically, this means that each client measures how fast it can download from each peer and, in turn, serves those from whom it gets the best download rates. This strategy is applied to all but one slot, which is given to an interested client regardless of its upload rate. This so-called "optimistic unchoking" allows the discovery of peers better than those currently selected (i.e. peers with higher upload rates). If implemented strictly, however, this strategy would considerably slow the entry of newcomers into a running swarm, as they obviously have nothing to share at the beginning. Thus, clients that have nothing to share are given three times more chances to be selected by the optimistic unchoke.

When a client has finished downloading a file, it no longer has a download rate from other peers, but it can still share (upload) pieces of the file. In this case the choking algorithm considers upload rate instead: peers are selected based on how fast they can be uploaded to, which spreads the file faster. Such "seeder" peers that store the whole file are very important to the functioning of a swarm. If a swarm contains no seeders, pieces of the file may end up missing from the swarm as a whole.
In this sense the system requires at least some level of altruistic behaviour from "seeders".
Seriousness (Score:2, Informative)
a problem fixed by the very behaviour of each serious user, who downloads and then leaves the file on his disk (seeding it) until it has been there at least a few days or reaches a good (> 1) share ratio.
True, but as a file transfer system becomes easier for novices to use, it is likely to draw users who aren't "serious", who cancel the upload as soon as the download completes. And if you try to enforce share ratios on a registered tracker, remember that the aggregate share ratio across all users - total bytes uploaded divided by total bytes downloaded - is exactly 1.0, because every byte someone downloads is a byte someone else uploaded; therefore not everybody can have a cumulative ratio >= 1.0. What happens when demand falls off for a file, and though you leave the upload going, nobody downloads more than a couple megabytes for days?
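The "not everybody can be above 1.0" point is simple accounting: every byte downloaded by one peer was uploaded by another, so swarm-wide totals must balance. A made-up transfer log illustrates it:

```python
# Made-up swarm transfer log: (uploader, downloader, megabytes).
# Every byte one peer downloads was uploaded by another peer.
transfers = [
    ('seed', 'a', 700),
    ('a', 'b', 400),
    ('b', 'a', 300),
    ('a', 'c', 500),
    ('seed', 'c', 200),
]

uploaded, downloaded = {}, {}
for src, dst, mb in transfers:
    uploaded[src] = uploaded.get(src, 0) + mb
    downloaded[dst] = downloaded.get(dst, 0) + mb

total_up = sum(uploaded.values())
total_down = sum(downloaded.values())
print(total_up, total_down, total_up / total_down)  # aggregate ratio is always exactly 1.0
```

Individual ratios still vary freely (here the seed uploads 900 MB and downloads nothing), but they can only exceed 1.0 at someone else's expense.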
If it matters, my personal rule when downloading something over BitTorrent or eMule is to keep the upload going for at least 24 hours after the download completes, and then until it reaches a ratio >= 1.5 or a week has passed, whichever comes first.