Thus spake "Vadim Antonov" <avg@exigengroup.com>
Well, maybe, but L2 switches do queueing and drops as well, so there should be some way to indicate which packets to drop, and which to keep. This means that they should be able to look deeper inside frames to extract information for frame classification.
See also: IEEE 802.1p
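
As a rough illustration of what such classification involves, here is a minimal sketch (Python, with the constants and offsets spelled out; names are mine, not from any particular switch) of peeking past the Ethernet header: use the 802.1p PCP bits when a VLAN tag is present, and fall back to the IP precedence bits of the TOS byte for untagged IPv4 frames.

    import struct

    ETH_P_8021Q = 0x8100   # 802.1Q VLAN tag TPID
    ETH_P_IPV4  = 0x0800

    def classify(frame: bytes) -> int:
        """Return a 0-7 priority for an Ethernet frame.

        Prefer the 802.1p PCP bits of a VLAN tag; for untagged IPv4
        frames, fall back to the IP precedence bits of the TOS byte.
        """
        (ethertype,) = struct.unpack_from("!H", frame, 12)
        if ethertype == ETH_P_8021Q:
            (tci,) = struct.unpack_from("!H", frame, 14)
            return tci >> 13          # PCP = top 3 bits of the TCI
        if ethertype == ETH_P_IPV4:
            return frame[15] >> 5     # IP precedence = top 3 bits of TOS
        return 0                      # everything else: best effort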
(Of course, a cleaner architecture would simply map L3 TOS into some L2 TOS bits at the originating hosts, but this just didn't happen...)
This would not work, as the L2 TOS information would be discarded at the first L3 hop. Some vendors' routers map L3 TOS into L2 TOS at each hop (if you enable that functionality) for media that support L2 TOS. Point-to-point L2 media don't really have this problem, as TOS-based L3 forwarding can prioritize packets by itself, and L2 TOS would be redundant.
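
For what it's worth, the per-hop remarking described above amounts to something like the following sketch (Python, function name hypothetical): since routing discarded the incoming frame's L2 header, the outgoing 802.1p bits have to be re-derived from the packet's L3 TOS on egress.

    def remark_on_egress(vlan_tci: int, ip_tos: int) -> int:
        """Rewrite the PCP field of an outgoing 802.1Q TCI from L3 TOS.

        The previous hop's L2 TOS is gone (the old frame header was
        stripped when the packet was routed), so the marking is rebuilt
        from the IP header: here the three IP precedence bits are
        copied into the three PCP bits, as vendor defaults commonly do.
        """
        pcp = ip_tos >> 5                          # top 3 bits of TOS
        return (vlan_tci & 0x1FFF) | (pcp << 13)   # replace top 3 TCI bits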
One may argue that L2 switches typically are not bottlenecks, and the Internet access circuits effectively limit Ethernet utilization for exterior traffic. However, there's a potential class of applications in clustered community computing (quite a lot of scientific simulations, actually) that can generate very high levels of intra-cluster traffic.
Imagine an L2 switch with a phone and a PC on a single 10/100 port. It is trivial to find applications which can temporarily saturate the port, introducing unacceptable jitter into the phone's media streams. There are several million such ports in active use today, and without some attempt at L2 TOS it takes only a few minutes of observation to discover how prevalent this particular problem is.
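
To make the jitter argument concrete, here is a toy sketch (Python, entirely hypothetical) of the strict-priority egress queue an 802.1p-aware switch could run on that shared port: as long as the phone's frames carry a higher PCP, the PC's bulk traffic never gets ahead of them in the queue.

    import heapq
    from itertools import count

    class StrictPriorityPort:
        """Toy egress queue: the highest-PCP frame always goes first."""

        def __init__(self):
            self._heap = []
            self._seq = count()   # FIFO tie-break within one priority

        def enqueue(self, pcp: int, frame: bytes) -> None:
            heapq.heappush(self._heap, (-pcp, next(self._seq), frame))

        def dequeue(self) -> bytes:
            _, _, frame = heapq.heappop(self._heap)
            return frame

    port = StrictPriorityPort()
    port.enqueue(0, b"PC bulk transfer segment")
    port.enqueue(5, b"phone RTP frame")
    assert port.dequeue() == b"phone RTP frame"   # voice jumps the queue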
I took your suggestion to mean a "best effort" class below "normal" effort for "community" TOS. I could be mistaken.
That is exactly what I had in mind.
That is also what IEEE 802 had in mind.