Ethernet is the
term used for a family of standards that define the Network Access layer of the
most common type of LAN used today. The various standards differ in terms of
supported speeds, cable types, and maximum cable lengths. The Institute of
Electrical and Electronics Engineers (IEEE) has been responsible for defining
the various standards since it took over the process in 1980.
To make Ethernet easier to understand, its
functions will be discussed in terms of the OSI reference model's Data Link and
Physical layers. (Remember that the Network Access layer is a combination of these
two layers.)
The IEEE defines various standards at the
Physical layer, while it divides the Data Link layer functions into the following two
sublayers:
·
The 802.3 Media Access Control
(MAC) sublayer
·
The 802.2 Logical Link Control (LLC) sublayer
Even though the various physical layer standards
differ and require changes at the Physical layer, each of them uses the same 802.3
header and the 802.2 LLC sublayer.
The following sections look at the collision
detection mechanism used by Ethernet and how Ethernet functions at both
layers.
Collision Detection in Ethernet
Ethernet is a contention-based
media access method that allows all hosts in a network to share the
available bandwidth. This means that multiple hosts try to use the media to
transfer traffic. If multiple hosts send traffic at the same time, a collision
can occur, resulting in the loss of the frames that collided. Ethernet cannot
prevent such collisions, but it can detect them and take corrective action to
resolve them. It uses the Carrier Sense Multiple Access with Collision
Detection (CSMA/CD) protocol to do so. This is how CSMA/CD works:
1. Hosts looking to transmit a frame listen until the Ethernet media is not
busy.
2. When the media is not busy, hosts start sending the frame.
3. The source listens to make sure no collision occurred.
4. If a collision occurs, the source hosts send a jamming signal to
notify all hosts of the collision.
5. Each source host randomizes a timer and waits that long before
resending the frame that collided.
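The steps above can be sketched in Python. This is a minimal sketch, not a real NIC implementation: `medium_busy` and `collision_on_attempt` are hypothetical callables standing in for the card's carrier-sense and collision-detect hardware, while the 16-attempt limit and the backoff cap of 10 follow classic Ethernet parameters.

```python
import random

def backoff_slots(attempt, rng=random):
    """Step 5: binary exponential backoff. After the nth collision a
    host waits a random number of slot times chosen uniformly from
    0 .. 2^min(n, 10) - 1 (the cap of 10 matches classic Ethernet)."""
    return rng.randint(0, 2 ** min(attempt, 10) - 1)

def csma_cd_send(medium_busy, collision_on_attempt, max_attempts=16):
    """Sketch of the CSMA/CD loop (hypothetical helper names)."""
    for attempt in range(1, max_attempts + 1):
        # Steps 1-2: listen until the medium is idle, then transmit.
        while medium_busy():
            pass
        # Step 3: listen while sending; no collision means success.
        if not collision_on_attempt(attempt):
            return attempt
        # Step 4: a real NIC sends a jam signal here so that every
        # host on the segment notices the collision.
        # Step 5: wait a random backoff before retrying.
        _ = backoff_slots(attempt)
    raise RuntimeError("frame dropped: too many collisions")
```

For example, if the medium is always idle but collisions occur on the first two attempts, the frame goes through on the third try.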
CSMA/CD works well, but it does create some
performance issues:
1.
Hosts must wait until the
Ethernet media is not busy before sending frames. This means only one host can
send frames at a time in a collision domain (such as in a network
connected to a hub). It also means that a host can either send or receive at
any one time. This logic is called half-duplex.
2.
During a collision, no frame
makes it across the network. Also, the offending hosts must wait a random time
before they can start to resend the frames.
Many networks suffered this sort of
performance degradation due to the use of hubs until switches became
affordable. In fact, statistics showed that anything over 30 percent
utilization caused performance degradation in Ethernet.
Remember that switches break collision
domains by providing a dedicated port to each host. This means that hosts
connected to a switch only need to wait if the switch is sending frames
destined to the host itself.
Half and Full Duplex Ethernet
In the previous
section, you learned about the logic called half-duplex, in which a
host can only send or receive at any one time. In a hub-based network, hosts are
connected in half-duplex mode because they must be able to detect collisions.
When hosts are
connected to a switch, they can operate at full duplex. This means
they can send and receive at the same time without worrying about collisions.
This is possible because full duplex uses two pairs of wires instead of one
pair. Using the two pairs, a point-to-point connection is created between the
transmitter of the host and the receiver of the switch, and vice versa. The
host thus sends and receives frames via different pairs of wires and hence does not need to
listen to see if it can send frames or not. You should note that CSMA/CD is
disabled at both ends when full duplex is used.
Figure Full Duplex
Apart from eliminating collisions, each
device actually gets twice the available bandwidth because it now has the
same bandwidth on both pairs of wires, with each pair used separately for
sending and receiving.
Figure 1-17 shows how the transmitter
on the host's interface card is connected to the receiver on the switch
interface, while the receiver on the host interface is connected to the
transmitter on the switch interface. Now, traffic sent by the host and traffic
sent to the host both have a dedicated path with equal bandwidth. If each path
has a bandwidth of 100Mbps, the host gets 200Mbps of dedicated bandwidth to the
switch. With half-duplex, there would be only a single path of
100Mbps used for both receiving and sending traffic.
Ethernet at the Data Link Layer
At the Data Link layer, Ethernet is responsible for addressing as well as
framing the packets received from the Network layer and preparing them for
actual transmission.
Ethernet Addressing
Ethernet
addressing identifies either a single device or a group of devices on a LAN
using what is called a MAC address. A MAC address is 48 bits (6 bytes) long and is
written in hexadecimal format. Cisco devices typically write it in groups of
four hex digits separated by periods, while most operating systems write it in
groups of two digits separated by colons. For example, Cisco devices would
write a MAC address as 5022.ab5b.63a9, while most operating systems would write
it as 50:22:ab:5b:63:a9.
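Converting between the two notations is just a matter of regrouping the same 12 hex digits. A minimal sketch (the function names are illustrative):

```python
def _hex_digits(mac):
    """Strip separators and keep only the 12 hex digits of a MAC."""
    digits = "".join(c for c in mac.lower() if c in "0123456789abcdef")
    if len(digits) != 12:
        raise ValueError("a MAC address has exactly 12 hex digits")
    return digits

def to_cisco(mac):
    """Cisco notation: four-digit groups separated by periods."""
    d = _hex_digits(mac)
    return ".".join(d[i:i + 4] for i in range(0, 12, 4))

def to_colon(mac):
    """OS notation: two-digit groups separated by colons."""
    d = _hex_digits(mac)
    return ":".join(d[i:i + 2] for i in range(0, 12, 2))
```

For example, `to_cisco("50:22:ab:5b:63:a9")` yields `"5022.ab5b.63a9"`, and `to_colon` performs the reverse conversion.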
A unicast address
identifies a single device. This address is used to identify the source and
destination in a frame. Each LAN interface card has a globally unique MAC
address. The IEEE defines the format and the assignment of addresses.
Figure 48bit MAC address
To keep
addresses unique, each manufacturer of LAN cards is assigned a code called the organizationally
unique identifier (OUI). The first half of every MAC address is the OUI of
the manufacturer. The manufacturer assigns the second half of the address while
ensuring that the number is not used for any other card. The complete MAC
address is then encoded into a ROM chip on the card. Figure 1-18 shows the
composition of a MAC address.
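The OUI/vendor split described above can be shown in a couple of lines (the function name is illustrative):

```python
def split_mac(mac):
    """Split a MAC into its OUI (first 3 bytes, assigned by the IEEE
    to the manufacturer) and the vendor-assigned half (last 3 bytes)."""
    digits = "".join(c for c in mac.lower() if c in "0123456789abcdef")
    if len(digits) != 12:
        raise ValueError("a MAC address has exactly 12 hex digits")
    return digits[:6], digits[6:]
```

Applied to 5022.ab5b.63a9, this returns the OUI `5022ab` and the vendor-assigned portion `5b63a9`.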
A MAC
address can also identify a group of devices. These are called group
addresses. The IEEE defines the following two types of group addresses:
·
Broadcast Address – This address
has a value of FFFF.FFFF.FFFF and means that all devices in the network
should process the frame.
·
Multicast Address – Multicast addresses
are used when a frame needs to go to a group of hosts in the network. When IP
multicast packets need to travel over Ethernet, a multicast address of
0100.5exx.xxxx is used, where xx.xxxx can be any value.
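As a sketch of how the xx.xxxx portion gets its value: an IPv4 multicast group maps into the 0100.5exx.xxxx range by appending the low 23 bits of the group address to the fixed prefix (the function name is illustrative):

```python
import ipaddress

def ip_multicast_to_mac(group):
    """Map an IPv4 multicast group address to its Ethernet MAC:
    the fixed 0100.5e prefix plus the low 23 bits of the group
    address (so several IP groups can share one MAC address)."""
    ip = int(ipaddress.IPv4Address(group))
    low23 = ip & 0x7FFFFF                  # keep the low 23 bits
    mac = 0x01005E000000 | low23           # splice onto the prefix
    hexstr = f"{mac:012x}"
    return ".".join(hexstr[i:i + 4] for i in range(0, 12, 4))
```

For example, the all-hosts group 224.0.0.1 maps to 0100.5e00.0001.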
Data Encapsulation in the TCP/IP Model
The last thing you need to know about the
TCP/IP model is the data encapsulation process and PDUs. As in the case of the OSI
reference model, the data is encapsulated in a header (and a trailer in the case of the
Network Access layer) to create a Protocol Data Unit (PDU), which is passed down to the
next layer. Though you are already aware of the process, you must know the name of
each layer's PDU. The PDUs in the TCP/IP model are:
·
Transport Layer -> Segment
·
Internet Layer -> Packet
·
Network Access Layer -> Frame
Figure 1-24 shows the encapsulation process
in the TCP/IP model.
Figure Data encapsulation in TCP/IP Model
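The nesting of these PDUs can be illustrated with toy dictionaries. This is purely illustrative: the field contents are placeholders, not real header layouts.

```python
def encapsulate(app_data):
    """Wrap application data in each TCP/IP layer's PDU, top down."""
    # Transport layer adds its header -> Segment
    segment = {"transport_header": "TCP", "payload": app_data}
    # Internet layer adds its header -> Packet
    packet = {"ip_header": "IPv4", "payload": segment}
    # Network Access layer adds a header AND a trailer -> Frame
    frame = {"eth_header": "802.3", "payload": packet, "trailer": "FCS"}
    return frame
```

Note that only the outermost PDU, the frame, carries a trailer, matching the encapsulation process described above.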