Quakecon: Network Operations Center tour
Non-work, work related information. Many NANOG geeks might be interested in this video tour of the Quakecon NOC. As any ISP operator knows, gamers complain about problems faster than any NMS, so you've got to admire the bravery of any NOC in the middle of a gaming convention floor. What Powers Quakecon | Network Operations Center Tour https://www.youtube.com/watch?v=mOv62lBdlXU
highlights: "happy and blinking" "two firewalls for the two att 1gig links, and two spare doing ....." catalyst 6500's Also the 3750 on top of the services rack is funny... because empty. On Sat, Aug 1, 2015 at 3:27 PM, Sean Donelan <sean@donelan.com> wrote:
Non-work, work related information. Many NANOG geeks might be interested in this video tour of the Quakecon NOC. As any ISP operator knows, gamers complain about problems faster than any NMS, so you've got to admire the bravery of any NOC in the middle of a gaming convention floor.
What Powers Quakecon | Network Operations Center Tour https://www.youtube.com/watch?v=mOv62lBdlXU
It would have been more interesting to see:
-- a network weather map
-- the ELK implementation
-- actual cache statistics (historically steam/game downloads are not cacheable)

Thanks for the share though Sean! On Sat, Aug 1, 2015 at 9:16 PM, Christopher Morrow <morrowc.lists@gmail.com> wrote:
highlights: "happy and blinking" "two firewalls for the two att 1gig links, and two spare doing ....."
catalyst 6500's
Also the 3750 on top of the services rack is funny... because empty.
On Sat, Aug 1, 2015 at 3:27 PM, Sean Donelan <sean@donelan.com> wrote:
Non-work, work related information. Many NANOG geeks might be interested in this video tour of the Quakecon NOC. As any ISP operator knows, gamers complain about problems faster than any NMS, so you've got to admire the bravery of any NOC in the middle of a gaming convention floor.
What Powers Quakecon | Network Operations Center Tour https://www.youtube.com/watch?v=mOv62lBdlXU
-- Miano, Steven M. http://stevenmiano.com
* mianosm@gmail.com (Steven Miano) [Sun 02 Aug 2015, 03:52 CEST]:
It would have been more interesting to see:
-- a network weather map
-- the ELK implementation
-- actual cache statistics (historically steam/game downloads are not cacheable)
Not quite true according to http://blog.multiplay.co.uk/2014/04/lancache-dynamically-caching-game-instal... Also, 2 Gbps for 4,400 people? Pretty lackluster compared to European events. 30C3 had 100 Gbps to the conference building. And no NAT: every host got real IP addresses (IPv4 + IPv6). -- Niels.
Also, 2 Gbps for 4,400 people? Pretty lackluster compared to European events. 30C3 had 100 Gbps to the conference building. And no NAT: every host got real IP addresses (IPv4 + IPv6).
ietf, >1k people, easily fits in 10g, but tries to have two for redundancy. also no nat, no firewall, and even ipv6. but absorbing or combatting scans and other attacks cause complexity one would prefer to avoid. in praha, there was even a tkip attack, or so it is believed; turned off tkip. the quakecon net was explained very poorly. what in particular provides game-quality latency, or lack thereof? with only 2g, i guess i can understand the cache. decent bandwidth would reduce complexity. and the network is flat? randy
* randy@psg.com (Randy Bush) [Sun 02 Aug 2015, 13:37 CEST]:
ietf, >1k people, easily fits in 10g, but tries to have two for redundancy. also no nat, no firewall, and even ipv6. but absorbing or combatting scans and other attacks cause complexity one would prefer to avoid. in praha, there was even a tkip attack, or so it is believed; turned off tkip.
Didn't the IETF already deprecate TKIP?
the quakecon net was explained very poorly. what in particular provides game-quality latency, or lack thereof? with only 2g, i guess i can understand the cache. decent bandwidth would reduce complexity. and the network is flat?
Cabling up 4,400 ports does take a lot of effort, though. The QuakeCon video was typical for a server guy talking about network: with a focus on the network periphery, i.e. some servers supporting the network. I guess a tale of punching 300-odd patchpanels is not that captivating to everybody out there. -- Niels.
On Sun, Aug 2, 2015 at 7:56 AM, Niels Bakker <niels=nanog@bakker.net> wrote:
I guess a tale of punching 300-odd patchpanels is not that captivating to everybody out there.
I find this hard to believe. :) I was hoping for more 'how the network is built' (flat? segmented? any security protections so competitors can't kill off their competition?) and ideally some discussion of why the decisions made a difference. (what tradeoffs were made and why?)
On 2 Aug 2015, at 22:32, Christopher Morrow wrote:
any security protections so competitors can't kill off their competition?)
It would be interesting to learn whether they saw any DDoS attacks or cheating attempts during competitive play, or even casual non-competitive play amongst attendees. ----------------------------------- Roland Dobbins <rdobbins@arbor.net>
any security protections so competitors can't kill off their competition?)
It would be interesting to learn whether they saw any DDoS attacks or cheating attempts during competitive play, or even casual non-competitive play amongst attendees.
I wonder if that would be a reason for the relatively anemic 1Gb Internet pipe-- making sure that a DDoS couldn't push enough packets through to inconvenience the LAN party. (Disclaimer: $DAYJOB did the audio/visual/lighting for QuakeCon but we had nothing to do with the network and I was utterly uninvolved in any way, so my speculation is based on no information obtained from outside my own skull.) -- Dave Pooser Cat-Herder-in-Chief, Pooserville.com
On 2 Aug 2015, at 22:44, Dave Pooser wrote:
I wonder if that would be a reason for the relatively anemic 1Gb Internet
pipe-- making sure that a DDoS couldn't push enough packets through to inconvenience the LAN party.
While increasing bandwidth is not a viable DDoS defense tactic, decreasing it isn't one, either. ----------------------------------- Roland Dobbins <rdobbins@arbor.net>
While increasing bandwidth to the endpoint isn't viable, wouldn't increasing the edge bandwidth out to the ISP be a start in the right direction? I would assume this would be a start to solving the problem if your attacks were volumetric. Once the bandwidth is there you can look at mitigation before it reaches the endpoint, in this case the computers on the floor (assuming no NAT). On 2 Aug 2015 16:51, "Roland Dobbins" <rdobbins@arbor.net> wrote:
On 2 Aug 2015, at 22:44, Dave Pooser wrote:
I wonder if that would be a reason for the relatively anemic 1Gb Internet
pipe-- making sure that a DDoS couldn't push enough packets through to inconvenience the LAN party.
While increasing bandwidth is not a viable DDoS defense tactic, decreasing it isn't one, either.
----------------------------------- Roland Dobbins <rdobbins@arbor.net>
On 2 Aug 2015, at 22:56, Alistair Mackenzie wrote:
I would assume this would a start to the problem if your attacks were volumetric.
In a world of 430 Gb/sec reflection/amplification DDoS attacks, not really. ;> Just increasing bandwidth has never been a viable DDoS defense tactic, due to the extreme asymmetry of resource ratios in favor of the attackers. ----------------------------------- Roland Dobbins <rdobbins@arbor.net>
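Roland's asymmetry point is easy to make concrete with back-of-the-envelope arithmetic. A sketch with illustrative figures: only the 430 Gb/sec attack size comes from the thread, and the uplink sizes are hypothetical.

```python
# How many times over an attack of a given size fills a given uplink.
# Illustrative: a 430 Gb/s reflection/amplification attack against
# hypothetical 2 Gb/s and 10 Gb/s event uplinks.

def times_oversubscribed(attack_gbps: float, uplink_gbps: float) -> float:
    """Ratio of attack volume to link capacity; anything >> 1 is saturation."""
    return attack_gbps / uplink_gbps

print(times_oversubscribed(430, 2))   # 215.0
print(times_oversubscribed(430, 10))  # 43.0 -- a 5x upgrade barely dents it
```

Either way the pipe is gone; this is why adding bandwidth alone is not a defense.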
It's completely reasonable when the world at large is only secondary to the local, on-net operations. ----- Mike Hammett Intelligent Computing Solutions http://www.ics-il.com Midwest Internet Exchange http://www.midwest-ix.com ----- Original Message ----- From: "Roland Dobbins" <rdobbins@arbor.net> To: "nanog list" <nanog@nanog.org> Sent: Sunday, August 2, 2015 10:50:05 AM Subject: Re: Quakecon: Network Operations Center tour On 2 Aug 2015, at 22:44, Dave Pooser wrote:
I wonder if that would be a reason for the relatively anemic 1Gb Internet
pipe-- making sure that a DDoS couldn't push enough packets through to inconvenience the LAN party.
While increasing bandwidth is not a viable DDoS defense tactic, decreasing it isn't one, either. ----------------------------------- Roland Dobbins <rdobbins@arbor.net>
It most certainly does. If the core of the mission is local LAN play and your Internet connection fills up.... who gives a shit? The games play on. If your 500 megabit corporate connection gets a 20 terabit DDoS, your RDP session to the finance department will continue to hum along just fine. ----- Mike Hammett Intelligent Computing Solutions http://www.ics-il.com ----- Original Message ----- From: "Roland Dobbins" <rdobbins@arbor.net> To: "nanog list" <nanog@nanog.org> Sent: Sunday, August 2, 2015 11:23:18 AM Subject: Re: Quakecon: Network Operations Center tour On 2 Aug 2015, at 22:56, Mike Hammett wrote:
It's completely reasonable when the world at large is only secondary to the local, on-net operations.
It has nothing to do with DDoS. ----------------------------------- Roland Dobbins <rdobbins@arbor.net>
On 2 Aug 2015, at 23:49, Mike Hammett wrote:
If the core of the mission is local LAN play and your Internet connection fills up
You're assuming the DDoS attack originates from outside the local network(s). I was curious as to whether they'd seen any *internal* DDoS attacks. And again, external bandwidth doesn't matter for externally-sourced DDoS attacks. If the attacker wishes to do so, he'll completely overwhelm your transit bandwidth.
.... who gives a shit? The games play on.
No, they don't, if they require a connection across the Internet to game servers for matchmaking/auth purposes, etc. ----------------------------------- Roland Dobbins <rdobbins@arbor.net>
Not that often you see a bunch of people talking about a video you're in, especially so on NANOG. So here goes.

BYOC is around 2700 seats. Total attendance was around 11,000.

2Gbps has been saturated at some point every year we have had it. Additional bandwidth is definitely a serious consideration going forward. It is a lot better than the 45mbps or less we dealt with in 2010 and prior, but better doesn't mean good enough. Many games these days do depend upon online services, which forced us to look for options. AT&T has been sponsoring since then and we do appreciate it.

We have had the potential for DDoS attacks on our minds. Our first option in those cases is blackhole announcements to the carrier for the targeted /32. AT&T did provide address space for us to use so the BYOC was using public IPs, and hopefully the impact of blackholing a single IP could be made minimal. Thankfully we have not yet been targeted, and we can only keep hoping it stays that way.

We haven't tackled IPv6 yet since it adds complexity that our primary focus doesn't significantly benefit from, since most games just don't support it. Our current table switches don't have RA guard, and will probably require replacement to get ones that are capable.

We also re-designed the LAN back in 2011 to break up the giant single broadcast domain into a subnet per table switch. This has definitely gotten us some flak from the BYOC since it breaks their LAN browsers, but we thought a stable network was more important with how much games have become dependent on stable Internet connectivity. Still trying to find a good way to provide a middle ground for attendees on that one, but I'm sure everyone here would understand how insane a single broadcast domain with 2000+ hosts that aren't under your control is.

We have tried to focus on latency on the LAN, but when so many games are no longer LAN oriented, Internet connectivity became a dominant issue.
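A minimal sketch of the sanity check one might wrap around that blackhole-the-/32 procedure. The prefix here is hypothetical (192.0.2.0/24 is documentation space standing in for the AT&T-provided allocation, which the thread doesn't give):

```python
import ipaddress

# Guard around "announce a blackhole for the targeted /32": refuse to
# blackhole anything outside the event's own allocation.
# 192.0.2.0/24 is a documentation-space stand-in for the real prefix.

EVENT_PREFIXES = [ipaddress.ip_network("192.0.2.0/24")]

def blackhole_route(target_ip: str) -> str:
    """Return the /32 to hand to the upstream's RTBH mechanism."""
    host = ipaddress.ip_address(target_ip)
    if not any(host in net for net in EVENT_PREFIXES):
        raise ValueError(f"{target_ip} is outside our allocation; refusing")
    return f"{host}/32"

print(blackhole_route("192.0.2.77"))  # 192.0.2.77/32
```

Announcing the resulting /32 with the carrier's blackhole community is what actually drops the traffic upstream; the check above only guards against fat-fingering an address you don't own.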
Some traffic is routed out a separate lower capacity connection to keep saturation issues from impacting it during the event.

Squid and nginx do help with caching, and thankfully Steam migrated to an HTTP distribution method that allows for easy caching. Some other services make it more difficult, but we try our best. Before Steam changed to HTTP distribution there were a few years they helped by providing a local mirror, but that seems to have been discontinued with the migration to HTTP. The cache pushed a little over 4Gbps of traffic at peak at the event.

The core IT team which handles the network (L2 and above) is about 9 volunteers. The physical infrastructure is our IP & D team, which gets a huge team of volunteers together in order to get that 13 miles of cable ready between Monday and Wednesday. The event is very volunteer driven, like many LAN parties across the planet. We try to reuse cable from year to year, including loading the table runs onto a pallet to be made into new cables in future years.

I imagine I haven't answered everyone's questions, but hopefully that fills in some of the blanks. If this has anyone considering sponsorship interest in the event, the contact email is sponsors(at)quakecon.org. Information is also available on the website http://www.quakecon.org/.
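The cache-offload arithmetic is worth spelling out. Only the ~4 Gbps served-at-peak figure comes from the thread; the miss rate below is an assumption for illustration:

```python
# Cache-offload arithmetic: the cache serves clients at some rate while
# only misses cross the uplink. 4 Gb/s served is from the thread; the
# 1 Gb/s miss figure is assumed for illustration.

def offload_fraction(served_gbps: float, uplink_miss_gbps: float) -> float:
    """Share of client-facing bytes that never touched the uplink."""
    return 1.0 - uplink_miss_gbps / served_gbps

print(offload_fraction(4.0, 1.0))  # 0.75
```

With those numbers, three quarters of the Steam bytes hitting the BYOC never touch the 2 Gbps pipe, which is how a cache can push more traffic than the uplink is even capable of.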
so it is heavily routed using L3 on the core 'switches'? makes a lot of sense. Lots of switches will happily forward layer 3 packets.
and a lot of so-called switches will happily *route* at L3, which is i think the point. in this case, heavily subnetting a LAN, it makes a lot of sense. otoh, i did not believe in the fad of using 65xxs at the bgp global edge. while it was temporarily cheap, two years later not a lot of folk had that many boats which needed anchoring. randy
On 02/08/2015 23:30, Randy Bush wrote:
otoh, i did not believe in the fad of using 65xxs at the bgp global edge. while it was temporarily cheap, two years later not a lot of folk had that many boats which needed anchoring.
A juniper EX9200 is a switch and a cisco sup2t box is a router. The vendor said it so it must be true. As anchors, I would be hard put to make a choice between a 6500 and a 7500, which was a fine router in its day but alas only had a useful lifetime of a small number of years. Obsolescence happens. The distinction between layer 2 and layer 3 capable kit is not that important these days. What's important is whether the device's packet or frame forwarding capabilities are a good match for the expected workload and that the total operating cost over the depreciation period works. Nick
On Sun, Aug 2, 2015 at 6:57 PM, Nick Hilliard <nick@foobar.org> wrote:
As anchors, I would be hard put to make a choice between a 6500 and a 7500, which was a fine router in its day but alas only had a useful lifetime of a small number of years. Obsolescence happens.
isn't some of L3's edge still 7500's? I think some of 703/702's edges are still 7500's even.
On Sun, Aug 2, 2015 at 9:46 PM, Christopher Morrow <morrowc.lists@gmail.com> wrote:
On Sun, Aug 2, 2015 at 6:57 PM, Nick Hilliard <nick@foobar.org> wrote:
As anchors, I would be hard put to make a choice between a 6500 and a 7500, which was a fine router in its day but alas only had a useful lifetime of a small number of years. Obsolescence happens.
isn't some of L3's edge still 7500's? I think some of 703/702's edges are still 7500's even.
"Last Date of Support: HW The last date to receive service and support for the product. After this date, all support services for the product are unavailable, and the product becomes obsolete. December 31, 2012" oh .. maybe they really are all gone :)
On 3 Aug 2015, at 8:47, Christopher Morrow wrote:
oh .. maybe they really are all gone :)
People still run things long after EoS, heh. A 6500 *with a Sup2T* is OK at the edge, for now - it has decent ASICs which support critical edge features, unlike its predecessors. Myself, I'd much rather use an ASR9K or CRS (I don't know much about Juniper routers) as an edge device. ----------------------------------- Roland Dobbins <rdobbins@arbor.net>
On Sun, Aug 2, 2015 at 4:59 PM, Randy Bush <randy@psg.com> wrote:
josh,
thanks for the more technical scoop. now i get it a bit better.
We also re-designed the LAN back in 2011 to break up the giant single broadcast domain down to a subnet per table switch.
so it is heavily routed using L3 on the core 'switches'? makes a lot of sense.
Single core switch, the Cisco 6509 VE in the video, handles routing between subnets. Table switches have an IP for management and monitoring. We have some 3750Gs for additional routing in other parts of the event.
On 02.08.2015 23:36, Josh Hoppes wrote:
We haven't tackled IPv6 yet since it adds complexity that our primary focus doesn't significantly benefit from yet since most games just don't support it. Our current table switches don't have an RA guard, and will probably require replacement to get ones that are capable.
The lack of RA-guard/DHCPv6-guard can still bite you. A client can still send rogue RAs and set up a rogue DNS-server and start hijacking traffic as AAAA is preferred over A records by most operating systems these days. IPv6 first-hop security is really underrated these days and not providing the clients with IPv6 does not exclude IPv6 as a potential attack vector.
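Harald's point can be illustrated with a toy classifier: given the ICMPv6 payload and the Ethernet source MAC (assumed already extracted from the frame), flag Router Advertisements from unknown sources. A sketch only, not a substitute for real RA guard, and the MACs are made up:

```python
# Toy check: flag an ICMPv6 Router Advertisement (type 134) whose Ethernet
# source MAC is not one of the known-legitimate routers. Assumes the caller
# has already pulled the ICMPv6 payload and source MAC out of the frame.

ICMPV6_RA = 134  # ICMPv6 Router Advertisement message type

def is_rogue_ra(icmpv6: bytes, src_mac: str, trusted_macs: set) -> bool:
    if not icmpv6 or icmpv6[0] != ICMPV6_RA:
        return False  # not a Router Advertisement at all
    return src_mac.lower() not in {m.lower() for m in trusted_macs}

trusted = {"00:11:22:33:44:55"}            # hypothetical legit router MAC
ra = bytes([ICMPV6_RA, 0]) + b"\x00" * 14  # minimal stand-in RA header
print(is_rogue_ra(ra, "de:ad:be:ef:00:01", trusted))  # True: unknown source
```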
We also re-designed the LAN back in 2011 to break up the giant single broadcast domain down to a subnet per table switch. This has definitely gotten us some flack from the BYOC since it breaks their LAN browsers, but we thought a stable network was more important with how much games have become dependent on stable Internet connectivity. Still trying to find a good way to provide a middle ground for attendees on that one, but I'm sure everyone here would understand how insane a single broadcast domain with 2000+ hosts that aren't under your control is. We have tried to focus on latency on the LAN, however when so many games are no longer LAN oriented Internet connectivity became a dominant issue.
At The Gathering we solved this by using ip helper-address for specific game ports and a broadcast forwarder daemon (which has been made publicly available). It sounds really ugly, but it works pretty well; just make sure to rate-limit the broadcasts, as things can get pretty ugly in the case of a loop/broadcast storm.
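The forward-and-rate-limit logic of such a daemon might look something like this decision function. The port numbers and per-second cap are made up, and the actual socket plumbing (receiving broadcasts, re-sending into the other subnets) is omitted:

```python
import time

# Decision logic only: which broadcasts to re-send into other subnets, with
# a crude rate limit so a loop or broadcast storm doesn't get amplified.
# Port numbers and the per-second cap are illustrative.

GAME_PORTS = {27015, 27016}   # hypothetical game-discovery ports
MAX_PER_SEC = 50              # cap on forwarded broadcasts per second
_window = {"start": 0.0, "count": 0}

def should_forward(dst_port: int, now: float = None) -> bool:
    now = time.monotonic() if now is None else now
    if dst_port not in GAME_PORTS:
        return False                       # not a whitelisted game port
    if now - _window["start"] >= 1.0:      # new one-second window
        _window["start"], _window["count"] = now, 0
    if _window["count"] >= MAX_PER_SEC:
        return False                       # storm protection kicked in
    _window["count"] += 1
    return True

print(should_forward(27015, now=0.0))  # True
print(should_forward(9999, now=0.0))   # False: not a game port
```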
Some traffic is routed out a separate lower capacity connection to keep saturation issues from impacting it during the event.
Squid and nginx do help with caching, and thankfully Steam migrated to a http distribution method and allows for easy caching. Some other services make it more difficult, but we try our best. Before Steam changed to http distribution there were a few years they helped in providing a local mirror but that seems to have been discontinued with the migration to http. The cache pushed a little over 4Gbps of traffic at peak at the event.
The core IT team which handles the network (L2 and above) is about 9 volunteers. The physical infrastructure is our IP & D team, which gets a huge team of volunteers put together in order to get that 13 miles of cable ready between Monday and Wednesday. The event is very volunteer driven, like many LAN parties across the planet. We try to reuse cable from year to year, including loading up the table runs onto a pallet to be used in making new cables out of in future years.
Thanks for the write-up, it's always cool to read how others in the "LAN-party scene" does things! :) -- Harald
I help with an event that has a pretty decent sized lan party as well. We're not just focused on the lan party; it's more of a rock concerts - huge arcade - panels - lan party type event.

It was a few years ago that a Minecraft "griefing" team came and attacked the network internally. At the time the BYOC LAN party was, I think, using 3com switches on the edge. Griefers were doing MAC flooding or something that was causing the switches to fall over. And not just the switch they were connected to; it was bringing down many of them. They were doing it in spurts, and the people dealing with the network thought the issue was misbehaving equipment for a bit (it seemed foreign at that time that someone from the community would be doing it.) Mind you, the people running things (volunteers) are running on little sleep and had no time to build out security appliances, let alone watch a bunch of logs. They're pretty smart, but you know - you get a bunch of smart people together and they all bicker about how to do things their way. In the end, one of the griefers' friends went and told on them, and that's how they were discovered. Badges yanked and banned for life.

Most of these cons and events run on surplus hardware. Granted, these days there is more and more higher end stuff being cast away. More and more 10 gig, Juniper, Force10 and other decent equipment coming into play.

Getting bandwidth into the events is a pain. Huge venues are meant for large corporate events, not lower budget cons and festivals. Venue pricing I believe is $750-1500 per megabit: 100 megabit = $75,000 for the weekend. One year I remember there being a switch with 8 vlans on it sitting outside the back door with 8 Clear modems spread out, all blinking away. Geeks get creative. These days, a random family next door gets their business class FiOS paid for the entire year (with a good TV package) in return for a weekend or two a year of it being slammed. But that isn't keeping up with demand.
I think sponsorship is in our future as far as bandwidth goes.

Internally, the hotels charge for any ports. So if you need cross connects between rooms, it's pretty expensive. And it's managed by them, so running tagged traffic is a no-go, among other things. So out come miles of fiber and rolls of gaffers tape every year. And miles of cat5. The lan party is fairly concentrated, but other departments all have other network needs: HD video streams outbound, voip telephones, ARTNet, etc. It's crazy. But I guess it's a good way to keep skills sharp and learn new things.

Also, Steam and others should make a caching server solution similar to what exists in Apple OSX Server.

- Ethan
Venue Internet is the bane of events. Crazy expensive. Almost as expensive as a laborer in Chicago to move your box from the truck to your booth. ;-) ----- Mike Hammett Intelligent Computing Solutions http://www.ics-il.com Midwest Internet Exchange http://www.midwest-ix.com ----- Original Message ----- From: "Ethan" <telmnstr@757.org> To: nanog@nanog.org Sent: Monday, August 3, 2015 9:58:35 AM Subject: Re: Quakecon: Network Operations Center tour
hi ethan On 08/03/15 at 10:58am, Ethan wrote:
Getting bandwidth into the events is a pain. Huge venues are meant for large corporate events not lower budget cons and festivals. Venue pricing I believe is 750-1500$ per megabit. 100 megabit = $75,000 for the weekend. One year I rememeber there being a switch with 8 vlans on it sitting outside the back door with 8 clear modems spread out all blinking away.
for connectivity, do the hotels and convention centers still have wifi jammers so you cannot use your own 56Mbit wifi to get a connection to the outside world ? if possible, stick a bunch of dark mirrored-glass covered vans outside the event for wifi access

the "expensive part" is due to labor unions that control the workers and everything else, working the capitalistic "supply" and demand model to the max. the unions disallow you to carry your own gear from your car to the event

which is good and bad ... i dont buy their $10 budweiser, $5 water, etc especially when no outside drinks are allowed inside the event
Geeks get creative.
good thing .... and no unions to control what we did/do ... another ( 40yr old ) boat that has long since sailed since the days of why we had to fight off the unions in the electronics industry ... pixie dust alvin
On Mon, Aug 03, 2015 at 01:52:17PM -0700, alvin nanog wrote:
hi ethan
On 08/03/15 at 10:58am, Ethan wrote:
Getting bandwidth into the events is a pain. Huge venues are meant for large corporate events not lower budget cons and festivals. Venue pricing I believe is 750-1500$ per megabit. 100 megabit = $75,000 for the weekend. One year I rememeber there being a switch with 8 vlans on it sitting outside the back door with 8 clear modems spread out all blinking away.
for connectivity, does the hotels and convention centers still have wifi jammers so you cannot use your own 56Mbit wifi to get connection to the outside world ? if possible, stick a bunch of dark mirrored-glass covered vans outside the event for wifi access
In the US, the FCC has ruled that wifi jammers violate one or more parts of the FCC Rules and Regs. Marriott hotels paid a USD600K fine. A quick Google search on "FCC hotel jammer" pulls up a great many hits, of which these are the first seven:

- Jammer Enforcement | FCC.gov (https://www.fcc.gov/.../jamme...) Federal law prohibits the operation, marketing, or sale of any type of jamming equipment, including devices that interfere with cellular and Personal ...
- Marriott to Pay $600K to Resolve WiFi-Blocking ... | FCC (https://www.fcc.gov/.../marrio...) Oct 3, 2014 - Hotel Operator Admits Employees Improperly Used Wi-Fi Monitoring ... The complainant alleged that the Gaylord Opryland was "jamming ...
- WARNING: Wi-Fi Blocking is Prohibited | FCC.gov (https://www.fcc.gov/.../warnin...) Jan 27, 2015 - which hotels and other commercial establishments block wireless ... into this kind of unlawful activity by the operator of a resort hotel and ...
- FCC warns hotels against blocking guests' wi-fi (www.consumeraffairs.com/.../fcc-warns-hotels-against-blocking-guests-...) Jan 28, 2015 - Hotels, miffed by guests who used their own wi-fi hotspots instead of paying ... It's illegal to jam legal radio transmissions of any kind, FCC vows tough enforcement ... Some had argued that jamming wi-fi and cellphone calls is ...
- Hotels ask FCC for permission to block guests' personal Wi ... (www.pcworld.com/.../hotel-group-asks-fcc-for-permission-to-...) Dec 22, 2014 - Marriott argued some hotspot blocking may be justified, as long as the hotel isn't using illegal signal jammers. Unlicensed Wi-Fi hotspots ...
- FCC fines Marriott $600,000 for blocking guests' Wi-Fi ... (www.cnn.com/2014/10/03/travel/marriott-fcc-wi-fi-fine/) Oct 4, 2014 - It's the first time the FCC has investigated a hotel property for ... sense, where someone uses a jammer device to block wireless signals.
- How This Hotel Made Sure Your Wi-Fi Hotspot Sucked ... (readwrite.com/2014/.../marriott-nashville-opryland-jams-wifi-internet-wt...) Oct 4, 2014 - Caught by FCC for Wi-Fi jamming, Marriott's still not sorry.

--
Mike Andrews, W5EGO
mikea@mikea.ath.cx
Tired old sysadmin
On 4 Aug 2015, at 4:03, mikea wrote:
In the US, the FCC has ruled that wifi jammers violate one or more parts of the FCC Rules and Regs.
I travel quite a bit worldwide, and I've never run into this. I run my portable AP on 5GHz, FWIW. ----------------------------------- Roland Dobbins <rdobbins@arbor.net>
The WiFi jammers have an interesting MO. They don't throw up static on the frequency, that would also block their own wifi. They spoof de-authentication packets. I've been looking for a way to detect this kind of jamming because my WiFi sucks and I live next to three hotels, what you get for living in downtown Atlanta. On Mon, Aug 3, 2015 at 5:09 PM, Roland Dobbins <rdobbins@arbor.net> wrote:
On 4 Aug 2015, at 4:03, mikea wrote:
In the US, the FCC has ruled that wifi jammers violate one or more parts
of the FCC Rules and Regs.
I travel quite a bit worldwide, and I've never run into this. I run my portable AP on 5GHz, FWIW.
----------------------------------- Roland Dobbins <rdobbins@arbor.net>
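The deauth-spoofing MO described above can be recognized from the 802.11 frame-control byte. A toy classifier (actually capturing management frames requires a monitor-mode interface, which is out of scope here):

```python
# Toy classifier: is this 802.11 frame a deauthentication frame? In the
# frame-control byte, bits 2-3 carry the frame type (0 = management) and
# bits 4-7 the subtype (12 = deauth).

DEAUTH_SUBTYPE = 12

def is_deauth(frame: bytes) -> bool:
    if not frame:
        return False
    fc = frame[0]
    ftype = (fc >> 2) & 0x3
    subtype = (fc >> 4) & 0xF
    return ftype == 0 and subtype == DEAUTH_SUBTYPE

print(is_deauth(bytes([0xC0, 0x00])))  # True: deauth frame
print(is_deauth(bytes([0x80, 0x00])))  # False: beacon (subtype 8)
```

A sudden flood of deauth frames for your own BSSID from a source address you don't own is the telltale sign of this kind of "jamming."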
On 4 Aug 2015, at 4:38, Mr Bugs wrote:
They don't throw up static on the frequency, that would also block their own wifi. They spoof de-authentication packets.
Sure - I'm saying, I don't see this anywhere, is it possible most of this activity is on 2.4GHz and not 5GHz? ----------------------------------- Roland Dobbins <rdobbins@arbor.net>
hi mr bugs :-) On 08/03/15 at 05:38pm, Mr Bugs wrote:
The WiFi jammers have an interesting MO. They don't throw up static on the frequency, that would also block their own wifi. They spoof de-authentication packets. I've been looking for a way to detect this kind of jamming because my WiFi sucks and I live next to three hotels, what you get for living in downtown Atlanta.
i forgot if kismet showed signal strengths of the wifi ap's ... "stronger" signal wins over weaker signal strengths

might not be a jamming issue ?? kismet and tcpdump might be able to show you the packets you're looking for ?

what happens if you put up a properly designed wire mesh around the exterior windows of your house/condo/apt ??

i'd wag/blindly say the area is probably full of rogue wifi ap's floating around where everybody is trying to wardrive each other and pick up unsuspecting traveling visitors' login and passwd info ... signals bouncing off steel/concrete are not ez to filter out from what should be random background white noise if you're sitting next to the radiating source ..

pixie dust alvin # DDoS-Mitigator.net # DDoS-Simulator.net
3. Aug 2015 21:38 by bugs@debmi.com:
The WiFi jammers have an interesting MO. They don't throw up static on the frequency, that would also block their own wifi. They spoof de-authentication packets. I've been looking for a way to detect this kind of jamming because my WiFi sucks and I live next to three hotels, what you get for living in downtown Atlanta.
Blocking WiFi (jamming or deauth attacks) isn't allowed. The Marriott recently got slapped with a fine for doing so. Tell the FCC that the local hotels are doing it: https://www.fcc.gov/document/warning-wi-fi-blocking-prohibited http://arstechnica.com/tech-policy/2015/01/fcc-blocking-wi-fi-in-hotels-is-p... https://www.fcc.gov/encyclopedia/jammer-enforcement https://transition.fcc.gov/eb/jammerenforcement/jamfaq.pdf
On Sun, 2 Aug 2015, Dave Pooser wrote:
I wonder if that would be a reason for the relatively anemic 1Gb Internet pipe-- making sure that a DDoS couldn't push enough packets through to inconvenience the LAN party.
I was involved in delivering 1GigE to Dreamhack in 2001, which at the time (if I remember correctly) had 4,500 computers that participants brought with them. Usually these events nowadays tend to use 5-20 gigabit/s for that number of people, so 2x1GE is just not enough. Already in 2001, that GigE was fully loaded after 1-2 days. -- Mikael Abrahamsson email: swmike@swm.pp.se
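[The per-seat math behind the figures in this thread is simple enough to sanity-check; a quick back-of-the-envelope, where `mbps_per_seat` is just an illustrative helper:]

```python
def mbps_per_seat(link_gbps: float, seats: int) -> float:
    """Average uplink bandwidth available per participant, in Mbit/s."""
    return link_gbps * 1000 / seats


# Figures quoted in this thread:
# Quakecon: 2x1GE for ~4,400 attendees
print(round(mbps_per_seat(2, 4400), 2))   # ~0.45 Mbit/s per seat
# The 1,300-player event below: a single 5Gig link
print(round(mbps_per_seat(5, 1300), 2))   # ~3.85 Mbit/s per seat
```

[Averages of course hide the first-day download spike, which is where a Steam/web cache would earn its keep.]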
I recently wrapped up an event with 1,300 players on gigabit connections where we had a single 5Gig link. We never saturated the link, and peaked at 3.92Gbps for a few minutes. Bandwidth usage peaks on the first day and settles down after that (the event ran over an entire weekend, starting on Friday). If I recall correctly, the average was around 2Gbps. We did not have a Steam/web cache, and I expect one would further reduce the actual bandwidth usage. On 8/2/2015 7:32 AM, Randy Bush wrote:
Also, 2 Gbps for 4,400 people? Pretty lackluster compared to European events. 30C3 had 100 Gbps to the conference building. And no NAT: every host got real IP addresses (IPv4 + IPv6).
ietf, >1k people, easily fits in 10g, but tries to have two for redundancy. also no nat, no firewall, and even ipv6. but absorbing or combatting scans and other attacks causes complexity one would prefer to avoid. in praha, there was even a tkip attack, or so it is believed; turned off tkip.
the quakecon net was explained very poorly. what in particular provides game-quality latency, or lack thereof? with only 2g, i guess i can understand the cache. decent bandwidth would reduce complexity. and the network is flat?
randy
On Sun, 2 Aug 2015, Niels Bakker wrote:
Also, 2 Gbps for 4,400 people? Pretty lackluster compared to European events. 30C3 had 100 Gbps to the conference building. And no NAT: every host got real IP addresses (IPv4 + IPv6).
Quakecon is essentially a giant LAN party. Attendees Bring Your Own Computer (BYOC), haul big gaming rigs to Quakecon, and compete on the LAN. There isn't that much Internet traffic; there is only 100Mbps wired to each gaming station. I'm not a Quake fanatic, so I don't know what the important network metrics for a good gaming experience are. But I assume the important metrics are local, and they install a big central server complex in the center of the room. I'm assuming the critical lag is between the central server and the competitors, not the Internet. Otherwise they could have all stayed home and played in their basements across the Internet. Latency is probably more important than bulk bandwidth.
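[The latency-over-bandwidth point is easy to quantify: at LAN speeds, even the 100Mbps station port adds negligible delay compared to an Internet RTT. A rough sketch, ignoring propagation and queuing; `serialization_ms` is an illustrative helper:]

```python
def serialization_ms(payload_bytes: int, link_mbps: float) -> float:
    """Time to clock a packet onto the wire, in milliseconds."""
    return payload_bytes * 8 / (link_mbps * 1000)


# A small (~100-byte) game-state update on the 100 Mbps station port
print(serialization_ms(100, 100))    # 0.008 ms -- negligible
# Even a full 1500-byte frame is only a fraction of a millisecond
print(serialization_ms(1500, 100))   # 0.12 ms
# versus tens of milliseconds of Internet RTT playing from home
```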
On 01.08.2015 21:27, Sean Donelan wrote:
What Powers Quakecon | Network Operations Center Tour https://www.youtube.com/watch?v=mOv62lBdlXU Cool stuff!
For reference, here is the blog for the tech crew at the world's second-largest LAN party, The Gathering: http://technical.gathering.org/ A few highlights: * Over 12,000 Gigabit ports, 500 * 10Gigabit ports, 50 * 40Gigabit ports (not all utilized of course). * Gigabit to all participants. * Dual-stack public IPv4 and IPv6 to all participants. * 30Gbit internet connection (upgradeable if needed). * Zero-touch provisioning of all edge switches. Most of the NMS and provisioning systems are made in-house and are available on github (https://github.com/tech-server/) and all configuration files are released to the public after each event on ftp://ftp.gathering.org (seems to be down at the moment). -- Harald
Very interesting. I still have in ~/ a 6509 config I did for an early Quakecon (or some predecessor or similar event) as a favor for a friend in ~2003. The more things change... BTW, ISTR there's some dark fiber between Anatole and INFOMART. I'm sure there's somebody in the 'mart who could provide $REALLY_FAST connectivity if the fiber is still in place. On Sat, Aug 1, 2015 at 2:27 PM, Sean Donelan <sean@donelan.com> wrote:
Non-work, work related information. Many NANOG geeks might be interested in this video tour of the Quakecon NOC tour. As any ISP operator knows, gamers complain faster about problems than any NMS, so you've got to admire the bravery of any NOC in the middle of a gaming convention floor.
What Powers Quakecon | Network Operations Center Tour https://www.youtube.com/watch?v=mOv62lBdlXU
participants (21)
- Alistair Mackenzie
- alvin nanog
- Christopher Morrow
- Dave Pooser
- Ethan
- Harald F. Karlsen
- Josh Hoppes
- Laurent Dumont
- Mikael Abrahamsson
- Mike Hammett
- mikea
- Mr Bugs
- Nick Hilliard
- Niels Bakker
- Nikolay Shopik
- Randy Bush
- Roland Dobbins
- Sam Thomas
- Sean Donelan
- Steven Miano
- tqr2813d376cjozqap1l@tutanota.com