QBone Scavenger Service (QBSS)
QBone Scavenger Service (or simply ``scavenger service'') is a
network mechanism to let users and applications take advantage of
otherwise unused network capacity in a manner that would not
substantially affect performance of the default best-effort class.
The service is discussed on a mailing list
(email@example.com) with an open
archive. Anyone interested in scavenger service is encouraged to
subscribe by sending an email to
firstname.lastname@example.org; in the body, include the line
subscribe qbss-interest Firstname Lastname
Scavenger Service FAQ
How does scavenger service actually work?
Informally, scavenger service creates a parallel virtual network with
very scarce capacity. This capacity, however, is elastic and can
expand into capacity of the normal best-effort class of service
whenever the network has spare cycles. The expansion happens at a
very fine time granularity: everything not used by the default class
is available to the scavenger class.
Users (or their applications) voluntarily mark some traffic for
scavenger treatment by setting the differentiated services code point
(DSCP) in the IP packet headers to binary 001000. Routers put this
traffic into a special queue with very small allocated capacity using
a queuing discipline such as weighted round-robin (WRR), modified
deficit round-robin (MDRR), weighted fair queuing (WFQ), or similar.
The scavenger service definition should make the details clearer.
Why didn't you just make scavenger a lower priority class without
minimum departure rate?
With strict priority treatment, scavenger traffic would become
subject to starvation. During periods of congestion lasting for tens
of minutes or more, TCP connections in the scavenger class would time
out. Defining scavenger as a class of strictly lower priority than the
default class would require application developers to make the logic
of their applications more complex (by attempting to reconnect in case
of a timeout). We wanted to make scavenger compatible with existing
TCP implementations and applications.
Why did you have to specify a globally significant scavenger DSCP?
While within the normal differentiated services framework DSCPs have
only local significance, we found it necessary to have a single
codepoint in all domains. The main argument to make DSCP locally
significant is, essentially: ``Since in a QoS world every domain has
to police DSCPs on every domain boundary, it would not be any harder
to rewrite DSCPs than it would be to police them; having local
flexibility enables one to experiment with more classes of service and
makes for a better use of a scarce resource---64 different DSCPs.''
The argument is mostly correct if one restricts consideration to
services that have elevated priority semantics (such as a
form of premium or assured service). The antecedent of the argument
is no longer correct for non-elevated priority services such as
scavenger (i.e., those services that do not provide better treatment
but that provide treatment that is either worse than the default, as
is the case with scavenger service, or that is different, but equal,
as would be the case with a service such as the alternative best-effort
service). With non-elevated priority services, one no longer
needs to police DSCPs at every network boundary.
DSCP policing and re-marking functionality is not available on
every router; when it is available for a router, it might not be
supported on every kind of interface; when it is supported, it can
come with a significant performance cost (often more than 50%
packet-per-second rate drop). Having to police and re-mark on every
network boundary has actually been a quite significant practical
hurdle to deployment of inter-domain quality of service so far. Even
those networks that are built using routing equipment that can rewrite
DSCPs without performance degradation on every interface still suffer
from increased operational complexity. Getting rid of this
requirement has tremendous practical benefit for many networks.
Importantly, having a globally significant DSCP enables one to
deploy scavenger service on a granularity of a single (congested)
network interface rather than on a granularity of a whole network.
In addition, as long as they are DSCP-transparent (at least for
non-elevated global codepoints), uncongested core networks can simply
ignore these markings without affecting the end-to-end service.
Why would anyone possibly mark their traffic for degraded treatment?
There can be a number of reasons:
- One might already self-police today (e.g., try to run ``at
night''---during periods of low network use); scavenger would enable
one to self-police both more easily (no more looking at MRTG graphs)
and more efficiently (one gets all the unused capacity, not just a
fraction of it).
- Some networks charge for usage per-bit; these networks would
naturally charge less (possibly zero) for scavenger service.
- Applications that attempt to use the idle capacity of the network
in the same manner that distributed.net or SETI@Home use idle CPU
cycles would use scavenger.
Why would a network operator support scavenger?
- For a major research and education network, scavenger would
provide an opportunity to run the pipes hotter while enjoying all the
performance benefits of over-provisioning in the default class.
- For a major commercial network, scavenger offers an opportunity
for service differentiation (perhaps along the lines of Andrew
Odlyzko's damaged goods doctrine for the Internet).
- A smaller network (one serving end users) might use scavenger
service as a negotiation tool with larger network service providers.
- Support for scavenger in its minimal form (pass the codepoint and
treat scavenger traffic as the default class is treated) costs nothing
and can provide some benefit to customers.
Why do you call it a ``QoS technique'' if the treatment is
actually worse than that of best-effort traffic?
Scavenger is an application of the differentiated services quality of
service framework, used to create a service that gives different
treatment to different packets. Some packets are treated worse; this
means that some other packets are treated better. QoS isn't about
treating things better (it doesn't create network capacity)---it's
about treating things non-uniformly.
Did you guys come up with scavenger all by yourselves?
We most definitely do not get the credit for the idea of creating a
class of service that receives treatment worse than the normal
treatment. We refined the definition, adapted it to existing routing
equipment and to incremental deployment, made decisions about the
fine points, tested equipment, and deployed the service.
Earlier work includes ``A
Lower Than Best-Effort Per-Hop Behavior'' internet-draft by R. Bless
and K. Wehrle and ``A
Bulk Handling Per-Domain Behavior for Differentiated Services''
internet-draft by B. Carpenter and K. Nichols.
Scavenger Router Configuration Examples
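As a general illustration (a sketch in Cisco IOS MQC syntax; the class
and policy names, the interface, and the 1% bandwidth figure are
hypothetical, and real deployments should follow vendor documentation),
a configuration along the following lines reserves a small minimum
departure rate for DSCP 8 traffic while letting it expand into unused
capacity:

```
class-map match-all QBSS
  match ip dscp 8
!
policy-map WAN-OUT
  class QBSS
    bandwidth percent 1
  class class-default
    fair-queue
!
interface POS1/0
  service-policy output WAN-OUT
```

Note that ``bandwidth percent 1'' is a guaranteed minimum, not a cap:
whatever the default class leaves unused is shared out, which is exactly
the scavenger semantics described above.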
Scavenger Host Configuration
If a Unix application with available source code wishes to mark
traffic sent through a particular TCP connection for scavenger
treatment, it can use a
setsockopt() call as follows:
/* requires <sys/socket.h> and <netinet/in.h> */
#define IPTOS_QBSS 0x20 /* DSCP 001000 in the upper six bits of the TOS byte */
int qbss = IPTOS_QBSS;
if (setsockopt(sock, IPPROTO_IP, IP_TOS, (char *) &qbss, sizeof qbss) < 0)
    perror("setsockopt");
In addition, patches to enable the configuration of scavenger
service are available for the following applications:
Scavenger Deployment Status
Networks that are known to have configured a bottom-feeding queue
for scavenger service traffic on one or more of their router
interfaces:
Several groups are experimenting with QBSS for long-lived,
high-throughput bulk transfers. We are aware of work at:
A number of Internet2 universities have begun marking portions of
their traffic for QBSS. The following graph (generated as part of Internet2 NetFlow Weekly
Reports) represents the percentage of scavenger traffic (octets)
on the Abilene network:
In addition, NetFlow-derived
AS matrices are available from the Ohio ITEC Abilene
NetFlow nightly reports page.
Scavenger Router Testing Results
Talks about Scavenger Service
Design Team (Concluded)
QBSS was designed by a design team working within the Internet2 QoS working
group. The following people participated in the design team's work:
Dave Hartzell, Great Plains Networks; Simon Leinen, SWITCH;
Will Murray, Cisco; Joe St Sauver, University of Oregon; Stanislav Shalunov,
Internet2 (chair); Ben
Teitelbaum, Internet2. The
design team has concluded.
You can look at the design
team mailing list archive.
As a result of its work, the following QoS WG documents were produced:
Comments: Stanislav Shalunov