From Wikipedia, the free encyclopedia
Gnutella (/ɡnuːˈtɛlə/, with a silent g, but often /nuːˈtɛlə/; possibly by analogy with the GNU Project) is a large peer-to-peer network. It was the first decentralized peer-to-peer network of its kind, and later networks adopted its model. It celebrated a decade of existence on March 14, 2010, and has a user base in the millions for peer-to-peer file sharing.
In June 2005, gnutella's population was 1.81 million computers, increasing to over three million nodes by January 2006. In late 2007, it was the most popular file-sharing network on the Internet, with an estimated market share of more than 40%.
The first client was developed by Justin Frankel, Gianluca Rubinacci and Tom Pepper of Nullsoft in early 2000, soon after the company's acquisition by AOL. On March 14, the program was made available for download on Nullsoft's servers. The event was prematurely announced on Slashdot, and thousands downloaded the program that day. The source code was to be released later, under the GNU General Public License (GPL).
The next day, AOL pulled the program over legal concerns and restrained Nullsoft from doing any further work on the project. This did not stop gnutella: after a few days, the protocol had been reverse engineered, and compatible free and open-source clones began to appear. This parallel development of different clients by different groups remains the modus operandi of gnutella development today.
The gnutella network is a fully distributed alternative to such semi-centralized systems as FastTrack (KaZaA) and the original Napster. Initial popularity of the network was spurred on by Napster's threatened legal demise in early 2001. This growing surge in popularity revealed the limits of the initial protocol's scalability. In early 2001, variations on the protocol (first implemented in proprietary and closed source clients) allowed an improvement in scalability. Instead of treating every user as client and server, some users were now treated as ultrapeers, routing search requests and responses for users connected to them.
This allowed the network to grow in popularity. In late 2001, the gnutella client LimeWire Basic became free and open source. In February 2002, Morpheus, a commercial file sharing group, abandoned its FastTrack-based peer-to-peer software and released a new client based on the free and open source gnutella client Gnucleus.
The word gnutella today refers not to any one project or piece of software, but to the open protocol used by the various clients.
The name is a portmanteau of GNU and Nutella, the brand name of an Italian hazelnut flavored spread: supposedly, Frankel and Pepper ate a lot of Nutella working on the original project, and intended to license their finished program under the GNU General Public License. Gnutella is not associated with the GNU project or its own peer-to-peer network, GNUnet.
On October 26, 2010, the popular gnutella client LimeWire was ordered shut down by Judge Kimba Wood of the United States District Court for the Southern District of New York when she signed a Consent Decree to which recording industry plaintiffs and LimeWire had agreed. This event was the likely cause of a notable drop in the size of the network, because, while negotiating the injunction, LimeWire staff had inserted remote-disabling code into the software. As the injunction came into force, users who had installed affected versions (newer than 5.5.10) were cut off from the P2P network. Since LimeWire was free software, nothing prevented the creation of forks that omitted the disabling code, as long as LimeWire trademarks were not used. The shutdown did not affect, for example, FrostWire, a fork of LimeWire created in 2004 that carries neither the remote-disabling code nor adware.
On November 9, 2010, LimeWire was resurrected by a secret team of developers and named LimeWire Pirate Edition. It was based on LimeWire 5.6 BETA. This version had its server dependencies removed and all the PRO features enabled for free.
To envision how gnutella originally worked, imagine a large circle of users (called nodes), each of whom runs gnutella client software. On initial startup, the client software must bootstrap and find at least one other node. Various methods have been used for this, including a pre-existing address list of possibly working nodes shipped with the software, updated web caches of known nodes (called Gnutella Web Caches), UDP host caches and, rarely, even IRC. Once connected, the client requests a list of working addresses. The client tries to connect to the nodes it was shipped with, as well as nodes it receives from other clients, until it reaches a certain quota. It connects to only that many nodes, locally caching the addresses it has not yet tried, and discarding the addresses it tried that were invalid.
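The bootstrap-and-quota behaviour described above can be sketched in a few lines. This is a minimal illustration, not code from any real client: the class name, the quota value, and the shape of the connection callback are all assumptions.

```python
class HostCache:
    """Sketch of gnutella bootstrap: try cached addresses until a connection
    quota is met, learning new addresses from each peer along the way."""

    def __init__(self, seed_hosts, quota=30):
        self.known = list(seed_hosts)   # addresses shipped with the client or from a web cache
        self.connected = []             # currently live connections
        self.quota = quota              # stop once this many live connections exist

    def bootstrap(self, try_connect):
        """try_connect(addr) returns a list of newly learned addresses on
        success, or None if the address is dead (it is then discarded)."""
        while self.known and len(self.connected) < self.quota:
            addr = self.known.pop(0)
            learned = try_connect(addr)
            if learned is None:
                continue                # invalid address: drop it
            self.connected.append(addr)
            # cache addresses learned from the peer for later attempts
            self.known.extend(a for a in learned
                              if a not in self.known and a not in self.connected)
```

In a real client the learned addresses arrive in pong messages and host-cache replies; here they are just the callback's return value.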
When the user wants to do a search, the client sends the request to each actively connected node. In version 0.4 of the protocol, the number of actively connected nodes for a client was quite small (around 5), so each node then forwarded the request to all its actively connected nodes, and they in turn forwarded the request, and so on, until the packet reached a predetermined number of hops from the sender (maximum 7).
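The 0.4-style flooding described above can be simulated on a toy topology. Only the hop limit comes from the text; the graph representation and function names are illustrative.

```python
MAX_HOPS = 7   # protocol 0.4 hop limit mentioned in the text

def flood(graph, origin, max_hops=MAX_HOPS):
    """Simulate query flooding: each node forwards the query to all of its
    neighbours until the hop limit is reached.
    graph: dict mapping node -> list of neighbour nodes.
    Returns the set of nodes the query reached."""
    seen = {origin}
    frontier = [origin]
    for _ in range(max_hops):
        nxt = []
        for node in frontier:
            for peer in graph[node]:
                if peer not in seen:    # real clients dedupe by message GUID
                    seen.add(peer)
                    nxt.append(peer)
        frontier = nxt
    return seen
```

With ~5 neighbours per node and 7 hops, the reachable set grows geometrically with each hop, which is exactly why the flooding traffic became a scalability problem.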
Since version 0.6 (2002), gnutella is a composite network made of leaf nodes and ultra nodes (also called ultrapeers). The leaf nodes are connected to a small number of ultrapeers (typically 3) while each ultrapeer is connected to more than 32 other ultrapeers. With this higher outdegree, the maximum number of hops a query can travel was lowered to 4.
Leaves and ultrapeers use the Query Routing Protocol to exchange a Query Routing Table (QRT), a table of 64 Ki-slots and up to 2 Mi-slots consisting of hashed keywords. A leaf node sends its QRT to each of the ultrapeers it is connected to, and ultrapeers merge the QRT of all their leaves (downsized to 128 Ki-slots) plus their own QRT (if they share files) and exchange that with their own neighbours. Query routing is then done by hashing the words of the query and seeing whether all of them match in the QRT. Ultrapeers do that check before forwarding a query to a leaf node, and also before forwarding the query to a peer ultra node provided this is the last hop the query can travel.
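The slot-hashing and all-keywords-must-match check can be sketched as follows. Real clients use a specific QRP hash function and bit-packed tables; `crc32` and a Python set stand in for them here, and the table size is the 64 Ki-slot figure from the text.

```python
import zlib

SLOTS = 64 * 1024   # 64 Ki-slot table

def slot(keyword):
    # real clients use the dedicated QRP hash; crc32 is an illustrative stand-in
    return zlib.crc32(keyword.lower().encode()) % SLOTS

def build_qrt(shared_filenames):
    """Build a Query Routing Table: the set of slots occupied by the hashed
    keywords of the shared files (a bitmap in practice)."""
    table = set()
    for name in shared_filenames:
        for word in name.lower().split():
            table.add(slot(word))
    return table

def may_forward(qrt, query):
    """Forward the query only if every keyword hashes to an occupied slot."""
    return all(slot(word) in qrt for word in query.lower().split())
```

In this sketch, an ultrapeer merging its leaves' tables is just a set union (`merged = qrt1 | qrt2`), mirroring how merged QRTs are exchanged between ultrapeers.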
If a search request turns up a result, the node that has the result contacts the searcher. In the classic gnutella protocol, response messages were sent back along the route the query came through, as the query itself did not contain identifying information of the node. This scheme was later revised, so that search results now are delivered over User Datagram Protocol (UDP) directly to the node that initiated the search, usually an ultrapeer of the node. Thus, in the current protocol, the queries carry the IP address and port number of either node. This lowers the amount of traffic routed through the gnutella network, making it significantly more scalable.
If the user decides to download the file, they negotiate the file transfer. If the node that has the requested file is not firewalled, the querying node can connect to it directly. If it is firewalled, however, preventing the source node from receiving incoming connections, the client wanting the file sends it a so-called push request, asking the remote node to initiate the connection instead (to push the file). At first, these push requests were routed along the same chain the query had travelled. This was rather unreliable, because routes would often break and routed packets are always subject to flow control, so so-called push proxies were introduced. These are usually the ultrapeers of a leaf node, and they are announced in search results. The client connects to one of these push proxies using an HTTP request, and the proxy sends a push request to the leaf on behalf of the client. Normally, it is also possible to send a push request over UDP to the push proxy, which is more efficient than using TCP. Push proxies have two advantages: first, ultrapeer-leaf connections are more stable than routes, which makes push requests much more reliable; second, they reduce the amount of traffic routed through the gnutella network.
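The direct-versus-push decision can be sketched as below. The record fields and callback names are illustrative assumptions, and the actual push-proxy exchange is an HTTP (or UDP) request to the proxy rather than a Python callback.

```python
from collections import namedtuple

# Illustrative record of a search result: address, firewall status,
# announced push proxies, and the source's servent GUID.
Source = namedtuple("Source", "addr firewalled push_proxies guid")

def fetch(source, my_addr, connect, send_push):
    """Sketch of the download logic described above.
    connect(addr) attempts a direct connection to the source;
    send_push(proxy, guid, my_addr) asks a push proxy to have the
    firewalled source connect back to us."""
    if not source.firewalled:
        return connect(source.addr)               # direct HTTP download
    for proxy in source.push_proxies:             # proxies come from the search result
        if send_push(proxy, source.guid, my_addr):
            return "awaiting reverse connection"  # the source connects back and pushes the file
    return None                                   # no proxy reachable; download fails
```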
Finally, when a user disconnects, the client software saves the list of nodes that it was actively connected to and those collected from pong packets for use the next time it attempts to connect so that it becomes independent from any kind of bootstrap services.
In practice, this method of searching on the gnutella network was often unreliable. Each node is a regular computer user; as such, they are constantly connecting and disconnecting, so the network is never completely stable. Also, the bandwidth cost of searching on gnutella grew exponentially with the number of connected users, often saturating connections and rendering slower nodes useless. Therefore, search requests would often be dropped, and most queries reached only a very small part of the network. This observation identified the gnutella network as an unscalable distributed system, and inspired the development of distributed hash tables, which are much more scalable but support only exact-match, rather than keyword, search.
To address the problems of bottlenecks, gnutella developers implemented a tiered system of ultrapeers and leaves. Instead of all nodes being considered equal, nodes entering into the network were kept at the 'edge' of the network as a leaf, not responsible for any routing, and nodes which were capable of routing messages were promoted to ultrapeers, which would accept leaf connections and route searches and network maintenance messages. This allowed searches to propagate further through the network, and allowed for numerous alterations in the topology which have improved the efficiency and scalability greatly.
Additionally, gnutella adopted a number of other techniques to reduce traffic overhead and make searches more efficient. Most notable are the Query Routing Protocol (QRP) and Dynamic Querying (DQ). With QRP, a search reaches only those clients that are likely to have the files, so searches for rare files become vastly more efficient; with DQ, the search stops as soon as the program has acquired enough results, which vastly reduces the amount of traffic caused by popular searches. The Gnutella For Users website covers these and other improvements to gnutella in a user-friendly style.
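Dynamic Querying's stop-when-satisfied idea can be sketched as follows. The result target of 150 is an illustrative figure, not a number from the specification, and real clients also adjust the query's TTL per probe.

```python
def dynamic_query(ultrapeers, search, wanted=150):
    """Probe connected ultrapeers one at a time and stop as soon as enough
    results have arrived, instead of flooding all of them at once.
    search(peer) returns that peer's list of results for the query."""
    results = []
    for peer in ultrapeers:
        results.extend(search(peer))
        if len(results) >= wanted:
            break          # popular search: most ultrapeers are never queried
    return results
```

For a popular search the loop exits after one or two probes, which is where the traffic savings come from; a rare search still ends up asking every neighbour.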
One of the benefits of gnutella's decentralization is that it is very difficult to shut the network down, and the users alone decide which content is available. Unlike Napster, where the entire network relied on a central server, gnutella cannot be shut down by shutting down any one node, and no single company can control the contents of the network, not least because of the many free and open-source gnutella clients that share it.
The development of the gnutella protocol is currently led by the Gnutella Developers Forum (The GDF). Many protocol extensions have been and are being developed by the software vendors and free gnutella developers of the GDF. These extensions include intelligent query routing, SHA-1 checksums, query hit transmission via UDP, querying via UDP, dynamic queries via TCP, file transfers via UDP, XML meta data, source exchange (also termed the download mesh) and parallel downloading in slices (swarming).
There are efforts to finalize these protocol extensions in the gnutella 0.6 specification at the gnutella protocol development website. The gnutella 0.4 standard is outdated, even though it remains the latest formal specification, since all extensions so far exist only as proposals. In fact, it is hard or impossible to connect today with the 0.4 handshake, and according to developers in the GDF, version 0.6 is what new developers should pursue, using the work-in-progress specifications.
The gnutella protocol remains under development, and despite attempts to make a clean break with the complexity inherited from the old gnutella 0.4 and to design a clean new message architecture, it is still one of the most successful file-sharing protocols to date.
The Gnutella2 protocol, often referred to as G2, is, despite its name, not a successor protocol of gnutella, but rather a fork. A sore point with many gnutella developers is that the Gnutella2 name conveys an upgrade or superiority, which led to a Gnutella2 flame war. Other criticism included the use of the gnutella network to bootstrap G2 peers and poor documentation of the G2 protocol. Additionally, the search retries of the Shareaza client, which was one of the initial G2 clients, could unnecessarily burden the gnutella network.
The fork took place in 2002, and both protocols have undergone significant iterations since that time. G2 has advantages and disadvantages compared to gnutella. An advantage often cited is that Gnutella2's hybrid search is more efficient than the original gnutella query flooding, which was still in use in 2002. An advantage for gnutella is that its user population numbers in the millions, whereas the G2 network is approximately an order of magnitude smaller. It is difficult to compare the protocols in their current form; an end user's choice of client probably has as much of an effect as the choice of network.
The following tables compare general and technical information for a number of applications supporting the gnutella network. The tables do not attempt to give a complete list of gnutella clients. The tables are limited to clients that can participate in the current gnutella network.
|Client||Platform||License||Latest version||Based on|
|Acquisition||OS X||Proprietary||2.2 (v223) (November 19, 2010)||LimeWire|
|BearShare (pre-version 6)||Windows||Proprietary||?||Original work|
|Cabos||Java||GNU GPL||0.8.2 (February 9, 2010)||LimeWire|
|FilesWire (P2P)||Java||Proprietary||Beta 1.1 (2007)||Original work|
|FrostWire (pre-version 5)||Java||GNU GPL||5.6.9 (December 2, 2013)||LimeWire|
|giFT (Gnutella plug-in)||Cross-platform||GNU GPL||0.0.11 (2006-08-06)||Original work|
|Gnucleus-GnucDNA||Windows||GNU GPL, LGPL||? (June 17, 2005)||Original work|
|gtk-gnutella||Cross-platform||GNU GPL||1.0.1 (December 31, 2013)||Original work|
|iMesh (pre-version 6)||Windows||Proprietary||?||GnucDNA|
|KCeasy||Windows||GNU GPL||0.19-rc1 (February 3, 2008)||giFT|
|Kiwi Alpha||Windows||GNU GPL||? (June 17, 2005)||GnucDNA|
|LimeWire||Java||GNU GPL||5.5.16 (September 30, 2010)||Original work|
|LimeWire Pirate Edition||Java||GNU GPL||5.6.2||LimeWire|
|Morpheus||Windows||Proprietary||5.55.1 (November 15, 2007)||GnucDNA|
|MP3 Rocket (before Jan 2011)||Java||GNU GPL||6.4.4 (January 5, 2014)||LimeWire|
|Phex||Java||GNU GPL||? (February 1, 2009)||Original work|
|Poisoned||OS X||GNU GPL||0.5191 (August 8, 2006)||giFT|
|Shareaza||Windows||GNU GPL||? (November 24, 2013)||Original work|
|Symella||Symbian||GNU GPL||1.41 (December 11, 2009)||Original work|
|Zultrax||Windows||Proprietary||4.33 (April 2009)||Original work|
|Client||Hash search||Chat||Buddy list||Handles large files (> 4 GiB)||Unicode-compatible query routing||UPnP port mapping||NAT traversal||NAT port mapping||RUDP||TCP push proxy||UDP push proxy||Ultrapeer||GWebCache||UDP host cache||THEX||TLS||Other|
|giFT (core & plug-ins)||Yes||N/A||N/A||No||No||No||No||No||No||Yes (a)||No||No (b)||Yes||No||No||No||-|
|GnucDNA (c)||Yes||N/A||N/A||No||No||No||No||No||No||Yes||No||No (b)||Yes||No||No||No||-|
|gtk-gnutella||Yes (d)||No||No||Yes||Yes||Yes||Yes||Yes||Yes (i)||Yes||Yes||Yes||No (dropped)||Yes||Yes||Yes||IPv6, DHT, GUESS|
|LimeWire (h)||Yes (d)||Yes||GMail or XMPP||Yes||Yes||Yes||Yes (e)||Yes (g)||Yes||Yes||Yes||Yes||Yes||Yes||Yes||Yes||DHT|
|Shareaza||Yes||Yes||No||Yes||No||Yes||Yes||Yes||No||Yes||Yes||Yes||Yes||Yes (f)||Yes||No||G2, BT, eD2k, IRC|
^ Chat: Refers to direct client-to-client chat; not IRC chat, which is often also available in the same application through an embedded HTTP browser window.
^ UPnP port mapping: Automatically configures port forwarding in routers or combination modem/gateways which support UPnP control.
^ RUDP: The Reliable UDP protocol provides NAT-to-NAT transfers, sometimes called Firewall-to-Firewall or "hole-punching", in cases where port-forwarding is not or cannot be done by the user.
^ GWebCache: As GWCs had a history of problems with traffic overload and long-term reliability, UDP host caches became the preferred bootstrap method; though some GWCs remain available for the sake of older software.
^ a: Client mode only, as a dependent leaf on ultrapeers.
^ b: Lacks a high outdegree, so it is unusable as an ultrapeer in its current form.
^ c: Version 0.9.2.7
^ d: Via the Kademlia-based Mojito DHT network supported only by LimeWire and gtk-gnutella (starting with version r15750); this is completely different from SHA-1 searches supported by most gnutella clients.
^ e: Port triggering or firewall to firewall (FW2FW).
^ socks: Via SOCKS proxy which can tunnel over SSH.
^ f: Since version 18.104.22.168
^ g: Automatic with UPnP, or manual configuration in LimeWire firewall options
^ h: As the LimeWire client is no longer available, clients that share most of LimeWire's code base, like FrostWire, can provide an alternative.
^ i: gtk-gnutella version 0.98.4 and later.