A large number of emerging applications, such as IPTV, event broadcast, online games, and distance learning, require the support of live video streaming; yet this is perhaps the greatest unfulfilled promise of the Internet. The root of the problem is that the Internet, by nature autonomous, heterogeneous, and best-effort, cannot provide the services required by streaming applications. Recent developments in Peer-to-Peer (P2P) technologies bring new momentum to live video streaming owing to its inherent self-scaling property and ease of deployment. In spite of its popularity, there is no consensus on how a large-scale P2P live streaming system works. There are two fundamental problems in the design space: topology formation, which concerns how peers locate video content from one another, and content delivery. Further, there has been little study of the design trade-offs and few large-scale measurements. This thesis fills this gap. We leverage our earlier system, Coolstreaming, which was arguably the earliest large-scale P2P video streaming experiment and has been widely referenced in the community as a benchmark (over 400,000 Google entries). We have designed and implemented comprehensive logging tools to collect and analyze large sets of traces from real-world broadcasts, from which we establish a theoretical framework that concretely demonstrates the fundamental system design trade-offs and further identifies the main performance bottlenecks and the key factors behind them. Specifically, we show that (1) random topology formation can lead to convergence and stability; (2) video streaming performance is critically affected by system dynamics, in particular churn; (3) the system exhibits an excellent scaling property, yet the uploading capacity contributions from peers are highly skewed, with a small percentage of peers contributing the most; (4) the scale and streaming performance are largely determined by how well the system can handle flash crowds in a live streaming event.