On Wed, Dec 16, 2015 at 4:36 AM, Dave Taht <dave.taht@gmail.com> wrote:
On Tue, Dec 15, 2015 at 4:19 PM, William Herrin <bill@herrin.us> wrote:
From what you've posted, you don't want to detect the difference between a switch and a bridge; you want to detect the difference between wired and wireless links.
To be more clear, I wanted to detect whether there was more than one bridge on the network, where the bridge being a PITA was a wired/wireless bridge.
Hi Dave,

I recommend you stop using the word "bridge." I think I see where you're heading with it, but I think you're chasing a blind alley which encourages a false mental model of how layer 2 networks function. You came here for answers. This is one of them.

"Bridge" describes a device which existed in layer 2 networks a quarter century ago. You had a 10-base2 ethernet with every station connected to a shared coax wire. Or you had a token ring where each station was wired to the next station in a loop. Or, if you were sophisticated, you had 10-baseT with a hub that repeated bits from any port to all ports with no concept of packets.

And then you had a bridge which could connect these networks together, buffering complete packets and smartly repeating only the packets which belonged on the other side. The bridge let you expand past the distance limitations imposed by the ethernet collision domain, and it let you move between two different-speed networks.

These networks are now largely a historical curiosity. There are no hubs, no 10-base2, no token-passing rings. Not any more. Individual stations now connect directly to a bridge device, which these days we often call a "switch." Even where the stations have a shared medium (e.g. 802.11), the stations talk to the bridge, not to each other. "Bridge" specifies a condition that, today, is close enough to always true as makes no difference.
Are you just trying to detect stations behind wireless or do you want to identify segments that are carried over wireless?
The latter.
In this case a routing optimization that works well on wired links was enabled when there were wireless bridges on that segment, leading to some chaos in the originally referenced thread.
The short answer to your question is: no, there is no reliable way for software operating at layer 3 to determine that it's working across links which contain wired or wireless segments.

The longer answer is that you may be asking the wrong question. Layer 2 links transit different kinds of media: copper, fiber, radio, laser. Sometimes they travel virtual media where the underlying technologies are deliberately hidden: VPNs, MPLS. Sometimes a link starts copper, transits a media converter and ends up fiber. The media types do not, in and of themselves, make much difference with respect to optimizing layer-3 traffic flows.

Capacity. Loss. Latency. Jitter. These are the factors which matter. A full-duplex licensed radio link will tend to have exactly the same characteristics as a copper ethernet across the same distance. An 802.11 radio link will not -- 802.11 is half duplex with retransmission. It will tend to exhibit much higher jitter than the licensed radio link, and it has a true capacity limit far below the data rate.

The layer 2 media type is not known at layer 3, and optimizing for media type is probably barking up the wrong tree anyway. Optimize for the network characteristics which matter -- capacity, loss, latency, jitter. These are the things you want to detect and optimize for. Favor higher-capacity links. Avoid loss like the plague. Favor low latency. Prefer low jitter.

If a link isn't too full and isn't suffering high jitter, you can often estimate capacity by sending some small pings and some large pings and measuring the difference in round-trip latency. But this doesn't work if the link is composed of parallel paths -- you'll get the capacity of a single path instead of the total capacity. For example, a 128 kbps ISDN BRI running multilink PPP will detect as 64 kbps. And it doesn't work with half-duplex paths where capacity is a stochastic function of the number of colliding speakers.

Loss is the deadly one. Packet loss above a fractional percent kills TCP/IP pretty fast.
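The small-ping/large-ping trick can be sketched as follows. This is a minimal illustration, not a robust tool: the function name is mine, the RTT samples are assumed to be already measured (e.g. from `ping`), and it assumes the extra payload bytes cross the bottleneck link in both directions, since an ICMP echo reply carries the payload back.

```python
def estimate_capacity_bps(small_bytes, small_rtt_s, large_bytes, large_rtt_s):
    """Rough bottleneck capacity estimate from the RTT difference
    between a small ping and a large ping, in bits per second."""
    # Extra bits the large ping serializes onto the slowest link.
    extra_bits = (large_bytes - small_bytes) * 8
    extra_rtt = large_rtt_s - small_rtt_s
    if extra_rtt <= 0:
        raise ValueError("large ping must show a longer RTT")
    # Echo payloads are returned, so the extra bits transit twice.
    return 2 * extra_bits / extra_rtt

# e.g. 64-byte ping at 10.0 ms, 1464-byte ping at 32.4 ms
print(estimate_capacity_bps(64, 0.010, 1464, 0.0324))
```

As the text above warns, this reports the capacity of a single path on multilink bundles and misbehaves on busy or half-duplex links, where queueing noise swamps the serialization delta.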
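The idea of penalizing lossy links in the routing metric, mentioned below, could look something like this sketch. Everything here is an assumption for illustration: the function name, the penalty factor, and the premise that neighbors exchange send/receive counters at all (no deployed protocol I can point to does exactly this).

```python
def loss_adjusted_metric(base_metric, neighbor_sent, locally_received):
    """Inflate a base link metric as observed loss toward a neighbor
    rises. neighbor_sent is the neighbor's reported transmit count;
    locally_received is our receive count for that neighbor."""
    if neighbor_sent == 0:
        return base_metric  # no data yet; leave the metric alone
    loss = max(0.0, 1.0 - locally_received / neighbor_sent)
    # Penalty factor of 100 is arbitrary: 1% loss doubles the metric.
    return base_metric * (1.0 + 100.0 * loss)
```

A lossless sample leaves the metric untouched; `loss_adjusted_metric(10, 1000, 990)` (1% loss) doubles it, steering traffic toward cleaner links.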
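For quantifying jitter from a stream of delay samples, one common formulation is the smoothed interarrival-jitter estimator from RFC 3550 (RTP): for each successive pair of delay samples, J += (|D| - J) / 16. A sketch, with the function name my own:

```python
def smoothed_jitter(delays):
    """RFC 3550-style smoothed jitter over a sequence of delay
    samples (seconds). Steady delays yield zero; variation between
    consecutive samples pulls the estimate up."""
    j = 0.0
    prev = None
    for d in delays:
        if prev is not None:
            # Move 1/16 of the way toward the latest delay variation.
            j += (abs(d - prev) - j) / 16.0
        prev = d
    return j
```

Constant 10 ms delays give a jitter of zero, while delays alternating between 10 ms and 30 ms push the estimate well above it -- which is the signature you'd expect from a loaded or half-duplex link.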
Detecting loss is highly situational. In principle your routers could keep track of packet counts sent and received from each neighbor and communicate them to each other, then alter the base link metric depending on differences in the counts. I haven't seen this done anywhere, though; except on point-to-point links, the router generally doesn't know or track which neighbor sent which packet.

Latency (delay) changes with distance and buffering. You can't really tell the difference between the two from layer 3. You can't measure latency without actively polling, which consumes capacity. And high jitter renders latency measurements unreliable.

Jitter is the degree to which latency changes from packet to packet. Links operating near capacity will express higher jitter as some packets are buffered and some aren't. Links with an underlying error detection and retransmission mechanism (like 802.11) will express higher jitter. Half-duplex links with collision avoidance or collision detection (802.11 or old-school 10-baseT) will express higher jitter as packets from simultaneously transmitting stations collide and are delayed with a backoff before retransmission.

Jitter can be measured on a link, but it can't be predicted. Once measured, you can't know the cause: whether it's a full-duplex link running near capacity or a half-duplex link that occasionally has other stations speaking. That makes it hard to predict whether and how often the jitter will recur.

Anyway, the bottom line is that you only think you care whether a link is wired or wireless. What you *really* care about is: capacity, loss, latency, jitter.

Regards,
Bill Herrin

--
William Herrin ................ herrin@dirtside.com  bill@herrin.us
Owner, Dirtside Systems ......... Web: <http://www.dirtside.com/>