It's not often you see a bunch of people talking about a video you're in, especially on NANOG. So here goes.

The BYOC is around 2,700 seats, and total attendance was around 11,000. The 2 Gbps link has been saturated at some point every year we have had it, so additional bandwidth is definitely a serious consideration going forward. It is a lot better than the 45 Mbps or less we dealt with in 2010 and prior, but better doesn't mean good enough. Many games these days depend on online services, which forced us to look for options. AT&T has been sponsoring since then, and we do appreciate it.

We have had the potential for DDoS attacks on our minds. Our first option in that case is a blackhole announcement to the carrier for the targeted /32. AT&T provided address space for us to use, so the BYOC is on public IPs and hopefully the impact of blackholing a single IP can be kept minimal. Thankfully we have not yet been targeted, and we can only hope it stays that way.

We haven't tackled IPv6 yet: it adds complexity, and our primary focus doesn't significantly benefit from it since most games just don't support it. Our current table switches don't have RA guard, and will probably need to be replaced with ones that are capable.

We also redesigned the LAN back in 2011 to break the giant single broadcast domain down to a subnet per table switch. This has definitely gotten us some flak from the BYOC since it breaks their in-game LAN browsers, but we decided a stable network was more important given how dependent games have become on stable Internet connectivity. We are still trying to find a good middle ground for attendees on that one, but I'm sure everyone here understands how insane a single broadcast domain with 2,000+ hosts that aren't under your control is.

We have tried to focus on latency on the LAN, but with so many games no longer LAN-oriented, Internet connectivity became the dominant issue. Some traffic is routed out a separate, lower-capacity connection to keep saturation from impacting it during the event. Squid and nginx help with caching, and thankfully Steam migrated to an HTTP distribution method, which allows for easy caching. Some other services make it more difficult, but we try our best. Before Steam changed to HTTP distribution there were a few years when they helped by providing a local mirror, but that seems to have been discontinued with the migration. The cache pushed a little over 4 Gbps of traffic at peak during the event.

The core IT team, which handles the network (L2 and above), is about 9 volunteers. The physical infrastructure is our IP & D team, which pulls together a huge team of volunteers to get those 13 miles of cable ready between Monday and Wednesday. The event is very volunteer-driven, like many LAN parties across the planet. We try to reuse cable from year to year, including loading the table runs onto a pallet to be made into new cables in future years.

I imagine I haven't answered everyone's questions, but hopefully that fills in some of the blanks. If this has anyone considering sponsorship interest in the event, the contact email is sponsors(at)quakecon.org. Information is also available on the website http://www.quakecon.org/.
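P.S. For the technically curious, a few illustrative sketches of the mechanics described above.

The blackholing is standard remotely triggered blackhole (RTBH) practice: tag a static null route for the attacked host and announce it to the carrier with their blackhole community attached. A minimal sketch in Cisco IOS style, assuming the carrier honors a blackhole community; the addresses, ASNs, and community values here are placeholders, not our actual configuration:

    ! Null-route the attacked host and tag it for redistribution.
    ip route 203.0.113.25 255.255.255.255 Null0 tag 666
    !
    ! Attach the carrier's blackhole community to tagged statics.
    route-map RTBH permit 10
     match tag 666
     set community 65000:666
    !
    router bgp 64512
     neighbor 192.0.2.1 remote-as 65000
     neighbor 192.0.2.1 send-community
     redistribute static route-map RTBH

Since the BYOC is on public per-host IPs, only the one targeted /32 goes dark while the rest of the floor keeps playing.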
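The split across the two uplinks is plain source-based policy routing. A minimal sketch with Linux iproute2, assuming the protected traffic can be matched by source prefix; the interface, gateway, and prefix are made up for illustration:

    # Give the secondary, lower-capacity uplink its own routing table (100).
    ip route add default via 198.51.100.1 dev eth1 table 100

    # Anything sourced from the protected prefix uses that table, so BYOC
    # saturation on the main pipe can't starve it.
    ip rule add from 10.20.0.0/16 lookup 100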
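And for the Steam caching, the usual trick is to point Steam's content hostnames at a local box in the event DNS and let nginx cache the HTTP downloads. A rough sketch, not our production config; the hostname, paths, sizes, and upstream resolver are all placeholders:

    # Cache store: game content is large and immutable, keep it for weeks.
    proxy_cache_path /var/cache/steam levels=2:2 keys_zone=steam:100m
                     max_size=2000g inactive=30d;

    server {
        listen 80;
        # Event DNS resolves Steam content hostnames to this server.
        server_name *.steamcontent.com;

        # Resolve the real CDN via an upstream resolver, not the event DNS,
        # otherwise proxy_pass would loop back to ourselves.
        resolver 203.0.113.53;

        location / {
            proxy_cache steam;
            proxy_cache_valid 200 30d;
            # The same file is served from many CDN hosts; key on URI only
            # so one copy satisfies everyone.
            proxy_cache_key $uri;
            proxy_set_header Host $host;
            proxy_pass http://$host$request_uri;
        }
    }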