Abstract
We study an access trace containing a sample of Wikipedia's traffic over a 107-day period, with the aim of identifying appropriate replication and distribution strategies for a fully decentralized hosting environment. We perform a global analysis of the whole trace, as well as a detailed analysis of the requests directed to the English edition of Wikipedia. In our study, we classify client requests and examine aspects such as the number of read and save operations, significant load variations, and requests for nonexistent pages. We also review proposed decentralized wiki architectures and discuss how they would handle Wikipedia's workload. We conclude that decentralized architectures must focus on techniques for handling read operations efficiently, while maintaining consistency and coping with issues typical of decentralized systems such as churn, unbalanced load, and malicious participating nodes.
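As an illustration of the request classification described in the abstract, the sketch below parses a simplified trace format and counts reads, save operations, and other requests. The trace line layout (request id, timestamp, URL, optional save marker), the file name `wikipedia_trace_sample.log`, and the URL heuristics are assumptions made purely for illustration; they are not taken from the paper or from the actual trace format.

```python
import re
from collections import Counter
from urllib.parse import unquote

# Assumed trace line: "<id> <unix_timestamp> <url> [save]"
# (hypothetical simplified format; the real trace may differ)
TRACE_LINE = re.compile(r"^(\d+)\s+(\d+\.?\d*)\s+(\S+)\s*(save)?$")

def classify(url: str, save_flag: str | None) -> str:
    """Assign a request to a coarse workload category."""
    if save_flag:                  # edit submission (save operation)
        return "save"
    if "action=edit" in url:       # opening the edit form is still a read
        return "read_edit_form"
    if "/wiki/" in url:            # regular article read
        return "read"
    return "other"                 # images, API calls, skins, etc.

def summarize(trace_path: str) -> Counter:
    """Count requests per category over one trace file."""
    counts = Counter()
    with open(trace_path) as f:
        for line in f:
            m = TRACE_LINE.match(line.strip())
            if not m:
                continue
            _, _, url, save_flag = m.groups()
            counts[classify(unquote(url), save_flag)] += 1
    return counts

if __name__ == "__main__":
    print(summarize("wikipedia_trace_sample.log"))
```

A classifier of this kind is enough to reproduce the coarse read/save breakdown the study reports on; finer categories (nonexistent pages, per-wiki splits, load over time) would require additional lookups against page metadata and time-bucketed counters.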
Original language | English |
---|---|
Pages (from-to) | 1830-1845 |
Number of pages | 16 |
Journal | Computer Networks |
Volume | 53 |
Issue number | 11 |
DOIs | |
Publication status | Published - 28 Jul 2009 |
Externally published | Yes |
Keywords
- Decentralized hosting
- P2P
- Wikipedia
- Workload analysis