Exploring CEF

There are a few fundamental actions that a router must perform to take a packet coming in one interface and forward it out another. The router must perform lookups to determine the exit interface and next hop information, it must rewrite the data link layer header and trailer for the egress link, and, if there are multiple egress interfaces and load sharing is supported, it must determine which interface to dispatch the packet on. Cisco has implemented three overall methods to perform these actions (and the others I left out) and calls them switching paths. I'll do my best to summarize all three before going a little deeper into the most preferred path, CEF. This article just scratches the surface of packet forwarding methods of any type.

First, we had Process Switching. This is pretty much exactly what it sounds like: when a packet comes in an interface, the router's general-purpose CPU is interrupted to do all the necessary lookups (determine the next hop IP and exit interface, next hop data link information, etc.), header rewrites, and load sharing decisions. This has some obvious drawbacks in that throughput is limited by the CPU, and the CPU must still handle control and management plane functions while handling every packet. This method was developed when CPUs were much slower than they are today and we didn't have nearly as advanced parallel processing capabilities.

To overcome having to do processor intensive lookups for each packet, Cisco developed Fast Switching. This is an optimization over complete process switching. When the first packet to a given destination arrives on the router, it is process switched. The outcome of process switching is passed down to a cache that can be used for future packets to the same destination. Accessing forwarding information from the cache is faster and more efficient than performing complete lookups for every packet. It has its drawbacks, though. Imagine when a router first boots up: it has no cache entries, so the CPU will be very busy process switching all initial packets to build the cache. Also, imagine a router in the core of the internet back then: packets to all kinds of destinations may require forwarding, so the cache grows large and keeps churning, repeatedly sending traffic back through the CPU to build new entries.

Finally, we arrive at the most preferred Cisco switching path, Cisco Express Forwarding or CEF. CEF allows a router to preprogram forwarding and next hop information for all known destinations and their associated next hops. CEF achieves this by maintaining two main tables: the Forwarding Information Base (FIB) and the Adjacency Table. The FIB is constructed by pulling the necessary information from the IP routing table (RIB), including the destination prefix, next hop IP address, and associated interface. It does not pull down RIB information that is not directly related to packet switching, such as routing protocol administrative distance. The Adjacency Table contains the local router interfaces, associated next hop IP addresses, and data link addressing information, such as the source and destination MAC addresses the router will use on a given link.

Because most routers have routing entries for many destinations that are reachable via a small number of next hops, the router can provision pointers that map destination prefixes in the FIB to the correct Adjacency Table entry, allowing quick lookups for forwarding. If equal cost multipath or unequal cost multipath forwarding is being used, CEF can pre-allocate forwarding entries for all necessary paths.

This is a tradeoff compared with Fast Switching in that more memory is potentially consumed by pre-provisioning forwarding information for destinations that may never need to be reached.
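To make the pointer idea concrete, here is a minimal Python sketch of a FIB whose entries point at shared adjacency entries. The addresses, interface names, and dict layout are purely illustrative, and a real FIB uses an optimized tree structure rather than a linear scan:

```python
import ipaddress

# Illustrative adjacency entries: one per next hop, holding the
# pre-built layer 2 rewrite information (values are made up).
adjacency = {
    "10.0.0.2":   {"iface": "Gi0/0", "dst_mac": "0000.BBBB.BB00"},
    "172.16.0.2": {"iface": "Gi0/1", "dst_mac": "0000.BBBB.BB01"},
}

# FIB entries point at adjacency entries instead of duplicating them,
# so many prefixes can share one rewrite.
fib = {
    ipaddress.ip_network("2.2.2.2/32"):    adjacency["10.0.0.2"],
    ipaddress.ip_network("10.0.0.0/30"):   adjacency["10.0.0.2"],
    ipaddress.ip_network("172.16.0.0/30"): adjacency["172.16.0.2"],
}

def lookup(dst):
    """Longest-prefix match, then follow the pointer to the adjacency."""
    addr = ipaddress.ip_address(dst)
    matches = [net for net in fib if addr in net]
    if not matches:
        return None  # no route
    best = max(matches, key=lambda net: net.prefixlen)
    return fib[best]
```

Note that 2.2.2.2/32 and 10.0.0.0/30 resolve to the very same adjacency object; that sharing via pointers is what keeps the FIB compact even when many prefixes exit the same interface.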

Demo Network

To dig into CEF a bit more, I built a three-router topology of Cisco 1921 routers. Different Cisco platforms offer different CEF features and limitations; software-forwarding routers like these probably provide the most CEF flexibility.

Below is my nice hand-drawn diagram. R1 and R2 are connected with parallel Gigabit links, and R3 is connected to both with serial T1 links. Each router has a loopback IP address consisting of four octets of its router number; R1's loopback is 1.1.1.1. All interfaces are participating in EIGRP AS 1. I configured MAC addresses on all four Ethernet interfaces, as shown below in the addressing information: R1 uses the AA MAC addresses and R2 uses the BB MAC addresses. The last two characters of each MAC address represent the interface number, so Gi0/0 ends in 00 and Gi0/1 ends in 01. I encourage you to take note of the addressing and a screenshot of the diagram to help follow along.

Links:
R1 – R2 Gi0/0 – 10.0.0.0/30
R1 – R2 Gi0/1 – 172.16.0.0/30
R1 – R3 Serial – 192.168.0.0/30
R2 – R3 Serial – 198.51.100.0/30
Loopbacks:
R1 Lo1 1.1.1.1/32
R2 Lo2 2.2.2.2/32
R3 Lo3 3.3.3.3/32
MAC Addresses:
R1 Gi0/0 MAC Address 0000.AAAA.AA00
R1 Gi0/1 MAC Address 0000.AAAA.AA01
R2 Gi0/0 MAC Address 0000.BBBB.BB00
R2 Gi0/1 MAC Address 0000.BBBB.BB01

FIB and Adj Table from the CLI

The Cisco IOS CLI exposes a lot of good information about CEF operation and state. We’ll start with comparing the IOS RIB to the CEF FIB, and then take a look at the Adjacency Table.

Take a moment to compare R1's routing table to the diagram and addressing. 1.1.1.1 is the local loopback, as confirmed by its being directly connected to Loopback1. R2's loopback and the serial link between R2 and R3 have equal cost multipath RIB entries via the parallel Gigabit links to R2.

RoutingLoop_R1#show ip route
Codes: L - local, C - connected, S - static, R - RIP, M - mobile, B - BGP
       D - EIGRP, EX - EIGRP external, O - OSPF, IA - OSPF inter area
       N1 - OSPF NSSA external type 1, N2 - OSPF NSSA external type 2
       E1 - OSPF external type 1, E2 - OSPF external type 2
       i - IS-IS, su - IS-IS summary, L1 - IS-IS level-1, L2 - IS-IS level-2
       ia - IS-IS inter area, * - candidate default, U - per-user static route
       o - ODR, P - periodic downloaded static route, H - NHRP, l - LISP
       a - application route
       + - replicated route, % - next hop override, p - overrides from PfR

Gateway of last resort is not set

      1.0.0.0/32 is subnetted, 1 subnets
C        1.1.1.1 is directly connected, Loopback1
      2.0.0.0/32 is subnetted, 1 subnets
D        2.2.2.2 [90/130816] via 172.16.0.2, 00:01:44, GigabitEthernet0/1
                 [90/130816] via 10.0.0.2, 00:01:44, GigabitEthernet0/0
      3.0.0.0/32 is subnetted, 1 subnets
D        3.3.3.3 [90/2297856] via 192.168.0.2, 00:01:44, Serial0/0/0
      10.0.0.0/8 is variably subnetted, 2 subnets, 2 masks
C        10.0.0.0/30 is directly connected, GigabitEthernet0/0
L        10.0.0.1/32 is directly connected, GigabitEthernet0/0
      172.16.0.0/16 is variably subnetted, 2 subnets, 2 masks
C        172.16.0.0/30 is directly connected, GigabitEthernet0/1
L        172.16.0.1/32 is directly connected, GigabitEthernet0/1
      192.168.0.0/24 is variably subnetted, 2 subnets, 2 masks
C        192.168.0.0/30 is directly connected, Serial0/0/0
L        192.168.0.1/32 is directly connected, Serial0/0/0
      198.51.100.0/30 is subnetted, 1 subnets
D        198.51.100.0
           [90/2170112] via 172.16.0.2, 00:01:44, GigabitEthernet0/1
           [90/2170112] via 10.0.0.2, 00:01:44, GigabitEthernet0/0

We can see the CEF FIB by issuing “show ip cef” on R1. Compare this to the routing table and try to justify each FIB entry.

RoutingLoop_R1#show ip cef
Prefix               Next Hop             Interface
0.0.0.0/0            no route
0.0.0.0/8            drop
0.0.0.0/32           receive
1.1.1.1/32           receive              Loopback1
2.2.2.2/32           10.0.0.2             GigabitEthernet0/0
                     172.16.0.2           GigabitEthernet0/1
3.3.3.3/32           192.168.0.2          Serial0/0/0
10.0.0.0/30          attached             GigabitEthernet0/0
10.0.0.0/32          receive              GigabitEthernet0/0
10.0.0.1/32          receive              GigabitEthernet0/0
10.0.0.2/32          attached             GigabitEthernet0/0
10.0.0.3/32          receive              GigabitEthernet0/0
127.0.0.0/8          drop
172.16.0.0/30        attached             GigabitEthernet0/1
172.16.0.0/32        receive              GigabitEthernet0/1
172.16.0.1/32        receive              GigabitEthernet0/1
172.16.0.2/32        attached             GigabitEthernet0/1
172.16.0.3/32        receive              GigabitEthernet0/1
192.168.0.0/30       attached             Serial0/0/0
192.168.0.0/32       receive              Serial0/0/0
192.168.0.1/32       receive              Serial0/0/0
192.168.0.3/32       receive              Serial0/0/0
198.51.100.0/30      192.168.0.2          Serial0/0/0
                     10.0.0.2             GigabitEthernet0/0
                     172.16.0.2           GigabitEthernet0/1
224.0.0.0/4          drop
224.0.0.0/24         receive
240.0.0.0/4          drop
255.255.255.255/32   receive

Next, I’ll try to dissect each FIB entry line by line.

RoutingLoop_R1#show ip cef
Prefix               Next Hop             Interface
0.0.0.0/0            no route
As per the routing table, no default route is known.

0.0.0.0/8            drop
Packets destined to 0.0.0.1 through 0.255.255.255 are dropped. 

0.0.0.0/32           receive
The router will accept packets destined to an unspecified address. This is common in DHCP packets. 

1.1.1.1/32           receive              Loopback1
The router will receive packets destined to 1.1.1.1. This is the local Loopback1 IP address. 

2.2.2.2/32           10.0.0.2             GigabitEthernet0/0
                     172.16.0.2           GigabitEthernet0/1
The router has two next hops and output interfaces to reach 2.2.2.2 via the interfaces specified. This is the R2 loopback. 

3.3.3.3/32           192.168.0.2          Serial0/0/0
R3's loopback is reachable with next hop 192.168.0.2 out interface S0/0/0.

10.0.0.0/30          attached             GigabitEthernet0/0
This is the network ID of the subnet on Gi0/0. 

10.0.0.0/32          receive              GigabitEthernet0/0
This is the all-zeros network address of the subnet on Gi0/0. Packets destined to it will be received by the router.

10.0.0.1/32          receive              GigabitEthernet0/0
This is the IP address configured on Gi0/0. Packets destined to this address will be received by the router. 

10.0.0.2/32          attached             GigabitEthernet0/0
This is the IP address of the adjacent router connected to Gi0/0 (R2) 

10.0.0.3/32          receive              GigabitEthernet0/0
This is the directed broadcast address of the network on Gi0/0. 

127.0.0.0/8          drop
Packets destined to the 127.0.0.0/8 loopback range are dropped.

172.16.0.0/30        attached             GigabitEthernet0/1
The entries for Gi0/1 and S0/0/0 are similar to those for Gi0/0, so I will omit line-by-line comments.

172.16.0.0/32        receive              GigabitEthernet0/1
172.16.0.1/32        receive              GigabitEthernet0/1
172.16.0.2/32        attached             GigabitEthernet0/1
172.16.0.3/32        receive              GigabitEthernet0/1
192.168.0.0/30       attached             Serial0/0/0
192.168.0.0/32       receive              Serial0/0/0
192.168.0.1/32       receive              Serial0/0/0
192.168.0.3/32       receive              Serial0/0/0

198.51.100.0/30      192.168.0.2          Serial0/0/0
                     10.0.0.2             GigabitEthernet0/0
                     172.16.0.2           GigabitEthernet0/1
Three-way load sharing to reach the R2 to R3 link. R1 is configured for EIGRP unequal cost multipath (UCMP) to allow this. 

224.0.0.0/4          drop
Multicast packets are dropped. 

224.0.0.0/24         receive
Link local multicast packets are received by the router. This allows traffic such as EIGRP or OSPF multicast to be received. 

240.0.0.0/4          drop
Class E (reserved) addresses are dropped. 

Now that we’ve seen the components of the FIB, we can view the Adjacency Table. This table is much smaller, as R1 only has three interfaces with adjacent devices. Remember from earlier that CEF uses pointers to map FIB entries to their corresponding adjacencies for forwarding.

Notice the entries here for the two Ethernet interfaces. Not only is the next hop IP information provided, but also the source and destination MAC addresses and EtherType. The string of characters between “Encap length” and “Provider:” is structured as <48-bit destination MAC> <48-bit source MAC> <16-bit EtherType>.

0000BBBBBB00 – Destination MAC address. This is the MAC address configured on R2 Gi0/0.
0000AAAAAA00 – Source MAC address. This is the MAC address of R1 Gi0/0.
0800 – EtherType. This value indicates IPv4 is encapsulated in the Ethernet frame.

RoutingLoop_R1#show adjacency encapsulation
Protocol Interface                 Address
IP       GigabitEthernet0/0        10.0.0.2(12)
  Encap length 14
  0000BBBBBB000000AAAAAA000800
  Provider: ARPA
IP       GigabitEthernet0/1        172.16.0.2(7)
  Encap length 14
  0000BBBBBB010000AAAAAA010800
  Provider: ARPA
IP       Serial0/0/0               point2point(10)
  Encap length 4
  0F000800
  Provider: HDLC
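As a sanity check on that field breakdown, here is a tiny Python sketch (my own helper, not a Cisco tool) that splits a 14-byte ARPA encapsulation string into its three fields:

```python
def parse_arpa_encap(hexstr):
    """Split a 14-byte Ethernet (ARPA) rewrite string into
    destination MAC, source MAC, and EtherType."""
    assert len(hexstr) == 28, "Encap length 14 means 28 hex characters"
    dmac, smac, ethertype = hexstr[:12], hexstr[12:24], hexstr[24:]
    dotted = lambda m: ".".join(m[i:i + 4] for i in range(0, 12, 4))
    return dotted(dmac), dotted(smac), ethertype

print(parse_arpa_encap("0000BBBBBB000000AAAAAA000800"))
# ('0000.BBBB.BB00', '0000.AAAA.AA00', '0800')
```

Running it against the Gi0/0 adjacency string above recovers exactly the R2 destination MAC, the R1 source MAC, and the IPv4 EtherType.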

For simplified output, I temporarily shut down R1 Gi0/1.

RoutingLoop_R1#configure terminal
Enter configuration commands, one per line.  End with CNTL/Z.
RoutingLoop_R1(config)#interface gigabitEthernet0/1
RoutingLoop_R1(config-if)#shutdown
RoutingLoop_R1(config-if)#end

The output below confirms that for R1 to reach 2.2.2.2, next hop 10.0.0.2 via interface Gi0/0 should be used.

RoutingLoop_R1#show ip cef 2.2.2.2
2.2.2.2/32
  nexthop 10.0.0.2 GigabitEthernet0/0
RoutingLoop_R1#show adjacency gigabitEthernet 0/0 detail
Protocol Interface                 Address
IP       GigabitEthernet0/0        10.0.0.2(22)
                                   0 packets, 0 bytes
                                   epoch 0
                                   sourced in sev-epoch 6
                                   Encap length 14
                                   0000BBBBBB000000AAAAAA000800
                                   ARP

CEF Load Sharing

Now that we’ve seen some of the fundamental constructs of CEF for general packet switching, we can review some of the ways it performs load sharing when multiple paths exist to a destination. When multiple egress interfaces exist, the router will compute a hash based on the selected algorithm. Most load sharing algorithms will place all packets of a single flow on the same link to prevent out-of-order packets. The output below displays the options on this particular router and software.

RoutingLoop_R1(config)#ip cef load-sharing algorithm ?
  dpi            Deep Packet Inspection
  include-ports  Algorithm that includes layer 4 ports
  original       Original algorithm
  tunnel         Algorithm for use in tunnel only environments
  universal      Algorithm for use in most environments

dpi – It does what it says on the tin: the router inspects packets to identify flows and sends each packet of a common flow down the same path.
include-ports – An addition to the universal algorithm that includes transport layer port numbers in the hash used to select an egress interface. This can provide more randomness in some situations.
original – A per source-destination hash. Can cause flow polarization, where multiple routers in series make the same hashing decision.
tunnel – An enhancement for routers that forward a lot of tunneled packets, where only a few source and destination IP addresses are present.
universal – An enhancement of the original algorithm where each router generates a 32-bit ID that is used as a seed for hashing. It is extremely unlikely that two routers in the same network will generate the same seed, which adds per-router randomness that prevents flow polarization.
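The anti-polarization idea behind the universal algorithm can be sketched in a few lines of Python. This is purely illustrative (CRC32 standing in for Cisco's undisclosed hash, and made-up flow addresses), but it shows how mixing a per-router seed into the hash makes two routers bucket the same flows differently:

```python
import ipaddress
import struct
import zlib

def bucket(src_ip, dst_ip, seed, num_buckets=2):
    """Pick an egress bucket from a flow's addresses plus a per-router seed."""
    key = struct.pack("!III",
                      int(ipaddress.IPv4Address(src_ip)),
                      int(ipaddress.IPv4Address(dst_ip)),
                      seed)
    return zlib.crc32(key) % num_buckets

flows = [(f"10.1.1.{h}", "2.2.2.2") for h in range(1, 101)]
r1 = [bucket(s, d, 0xCFC24791) for s, d in flows]  # seed from R1 'show cef state'
r2 = [bucket(s, d, 0xE2B381CC) for s, d in flows]  # seed from R2
# With the same seed on both routers ("original"-style behavior), r1 and r2
# would be identical and the flows would polarize onto the same links.
```

A given flow always hashes to the same bucket on a given router, so packets stay in order, but because R1 and R2 mix in different seeds, their per-flow choices diverge rather than cascading the same decision hop after hop.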

The output below confirms that R1 and R2 are using the universal per-destination load sharing algorithm. After the algorithm is the randomly generated ID that is used as a seed to prevent flow polarization.

RoutingLoop_R1#show cef state
CEF Status:
 RP instance
 common CEF enabled
IPv6 CEF Status:
 CEF disabled/not running
 dCEF disabled/not running
 universal per-destination load sharing algorithm, id CFC24791
IPv4 CEF Status:
 CEF enabled/running
 dCEF disabled/not running
 CEF switching enabled/running
 universal per-destination load sharing algorithm, id CFC24791


RoutingLoop_R2#show cef state
CEF Status:
 RP instance
 common CEF enabled
IPv6 CEF Status:
 CEF disabled/not running
 dCEF disabled/not running
 universal per-destination load sharing algorithm, id E2B381CC
IPv4 CEF Status:
 CEF enabled/running
 dCEF disabled/not running
 CEF switching enabled/running
 universal per-destination load sharing algorithm, id E2B381CC

Using the “show ip cef exact-route” command, we can verify that port numbers are not being considered by testing from R1's loopback IP to R2's loopback with different port numbers. No matter which ports are specified, the flow always egresses on Gi0/0.

RoutingLoop_R1#$ exact-route 1.1.1.1 src-port 50505 2.2.2.2 dest-port 444
1.1.1.1 -> 2.2.2.2 =>IP adj out of GigabitEthernet0/0, addr 10.0.0.2
RoutingLoop_R1#$ exact-route 1.1.1.1 src-port 50502 2.2.2.2 dest-port 444
1.1.1.1 -> 2.2.2.2 =>IP adj out of GigabitEthernet0/0, addr 10.0.0.2
RoutingLoop_R1#$ exact-route 1.1.1.1 src-port 50502 2.2.2.2 dest-port 441
1.1.1.1 -> 2.2.2.2 =>IP adj out of GigabitEthernet0/0, addr 10.0.0.2
RoutingLoop_R1#$ exact-route 1.1.1.1 src-port 50502 2.2.2.2 dest-port 442
1.1.1.1 -> 2.2.2.2 =>IP adj out of GigabitEthernet0/0, addr 10.0.0.2
RoutingLoop_R1#$ exact-route 1.1.1.1 src-port 50502 2.2.2.2 dest-port 4850
1.1.1.1 -> 2.2.2.2 =>IP adj out of GigabitEthernet0/0, addr 10.0.0.2
RoutingLoop_R1#$ exact-route 1.1.1.1 src-port 1 2.2.2.2 dest-port 4850
1.1.1.1 -> 2.2.2.2 =>IP adj out of GigabitEthernet0/0, addr 10.0.0.2

With that test done, I changed R1 to consider source and destination port numbers in its hash. The command issued is “ip cef load-sharing algorithm include-ports source destination”.

RoutingLoop_R1(config)#ip cef load-sharing algorithm include-ports source d
RoutingLoop_R1(config)#$-sharing algorithm include-ports source destination
RoutingLoop_R1(config)#end
RoutingLoop_R1#show cef state
CEF Status:
 RP instance
 common CEF enabled
IPv6 CEF Status:
 CEF disabled/not running
 dCEF disabled/not running
 universal per-destination load sharing algorithm, id CFC24791
IPv4 CEF Status:
 CEF enabled/running
 dCEF disabled/not running
 CEF switching enabled/running
 include-ports source destination per-destination load sharing algorithm, id CFC24791

We can see this change reflected in the “exact-route” CLI output with different port numbers.

RoutingLoop_R1#$ exact-route 1.1.1.1 src-port 50505 2.2.2.2 dest-port 443
1.1.1.1 -> 2.2.2.2 =>IP adj out of GigabitEthernet0/1, addr 172.16.0.2
RoutingLoop_R1#$ exact-route 1.1.1.1 src-port 50505 2.2.2.2 dest-port 441
1.1.1.1 -> 2.2.2.2 =>IP adj out of GigabitEthernet0/1, addr 172.16.0.2
RoutingLoop_R1#$ exact-route 1.1.1.1 src-port 1 2.2.2.2 dest-port 4850
1.1.1.1 -> 2.2.2.2 =>IP adj out of GigabitEthernet0/0, addr 10.0.0.2

Load Balancing

Many IP routing protocols and platforms support equal cost load balancing. In the demo network, R1 and R2 are connected with parallel, equal cost links. R1's routing table confirms that it has two equal cost routes to reach R2's loopback, shown below.

RoutingLoop_R1#show ip route 2.2.2.2
Routing entry for 2.2.2.2/32
  Known via "eigrp 1", distance 90, metric 130816, type internal
  Redistributing via eigrp 1
  Last update from 172.16.0.2 on GigabitEthernet0/1, 00:15:39 ago
  Routing Descriptor Blocks:
    172.16.0.2, from 172.16.0.2, 00:15:39 ago, via GigabitEthernet0/1
      Route metric is 130816, traffic share count is 1
      Total delay is 5010 microseconds, minimum bandwidth is 1000000 Kbit
      Reliability 255/255, minimum MTU 1500 bytes
      Loading 1/255, Hops 1
  * 10.0.0.2, from 10.0.0.2, 00:15:39 ago, via GigabitEthernet0/0
      Route metric is 130816, traffic share count is 1
      Total delay is 5010 microseconds, minimum bandwidth is 1000000 Kbit
      Reliability 255/255, minimum MTU 1500 bytes
      Loading 1/255, Hops 1

We can see these allocations in CEF by issuing the “show ip cef internal” command. This router platform supports up to 16 hash buckets per destination. In this specific example, only two hash buckets need to be allocated because the two paths are equal cost. This hash bucket concept will hopefully make more sense when we look at unequal cost load balancing.

RoutingLoop_R1#show ip cef 2.2.2.2 internal 
2.2.2.2/32, epoch 0, RIB[I], refcnt 5, per-destination sharing
  sources: RIB
  ifnums:
    GigabitEthernet0/0(3): 10.0.0.2
    GigabitEthernet0/1(4): 172.16.0.2
  path list 2BE69A4C, 5 locks, per-destination, flags 0x49 [shble, rif, hwcn]
    path 2BE68980, share 1/1, type attached nexthop, for IPv4
      nexthop 10.0.0.2 GigabitEthernet0/0, IP adj out of GigabitEthernet0/0, addr 10.0.0.2 310D9D40
    path 2BE689EC, share 1/1, type attached nexthop, for IPv4
      nexthop 172.16.0.2 GigabitEthernet0/1, IP adj out of GigabitEthernet0/1, addr 172.16.0.2 310D9BE0
  output chain:
    loadinfo 2C9A101C, per-session, 2 choices, flags 0083, 6 locks
      flags [Per-session, for-rx-IPv4, 2buckets]
      2 hash buckets
        < 0 > IP adj out of GigabitEthernet0/0, addr 10.0.0.2 310D9D40
        < 1 > IP adj out of GigabitEthernet0/1, addr 172.16.0.2 310D9BE0
      Subblocks:
        None

To enable unequal cost load balancing from R1 to R2, I increased the delay value slightly on R1’s Gi0/1 interface and configured EIGRP variance. Now we can see that the two paths to 2.2.2.2 have slightly different metrics and traffic share counts.

RoutingLoop_R1#show ip route 2.2.2.2
Routing entry for 2.2.2.2/32
  Known via "eigrp 1", distance 90, metric 130816, type internal
  Redistributing via eigrp 1
  Last update from 172.16.0.2 on GigabitEthernet0/1, 00:02:36 ago
  Routing Descriptor Blocks:
    172.16.0.2, from 172.16.0.2, 00:02:36 ago, via GigabitEthernet0/1
      Route metric is 134400, traffic share count is 39
      Total delay is 5150 microseconds, minimum bandwidth is 1000000 Kbit
      Reliability 255/255, minimum MTU 1500 bytes
      Loading 1/255, Hops 1
  * 10.0.0.2, from 10.0.0.2, 00:02:36 ago, via GigabitEthernet0/0
      Route metric is 130816, traffic share count is 40
      Total delay is 5010 microseconds, minimum bandwidth is 1000000 Kbit
      Reliability 255/255, minimum MTU 1500 bytes
      Loading 1/255, Hops 1

If we look at the hash buckets again, we can see that all 16 are allocated. However, because the route metrics (and thus traffic share) are so close to each other, 8 hash buckets are allocated for Gi0/0 and 8 are allocated for Gi0/1. Despite having different metrics, these links will be weighted equally by CEF.

RoutingLoop_R1#show ip cef 2.2.2.2 internal
2.2.2.2/32, epoch 0, RIB[I], refcnt 5, per-destination sharing
  sources: RIB
  ifnums:
    GigabitEthernet0/0(3): 10.0.0.2
    GigabitEthernet0/1(4): 172.16.0.2
  path list 2BE69CCC, 3 locks, per-destination, flags 0x49 [shble, rif, hwcn]
    path 2BE915F4, share 77/77, type attached nexthop, for IPv4
      nexthop 172.16.0.2 GigabitEthernet0/1, IP adj out of GigabitEthernet0/1, addr 172.16.0.2 310D9BE0
    path 2BE91588, share 80/80, type attached nexthop, for IPv4
      nexthop 10.0.0.2 GigabitEthernet0/0, IP adj out of GigabitEthernet0/0, addr 10.0.0.2 310D9D40
  output chain:
    loadinfo 3002A638, per-session, 2 choices, flags 0003, 5 locks
      flags [Per-session, for-rx-IPv4]
      16 hash buckets
        < 0 > IP adj out of GigabitEthernet0/1, addr 172.16.0.2 310D9BE0
        < 1 > IP adj out of GigabitEthernet0/0, addr 10.0.0.2 310D9D40
        < 2 > IP adj out of GigabitEthernet0/1, addr 172.16.0.2 310D9BE0
        < 3 > IP adj out of GigabitEthernet0/0, addr 10.0.0.2 310D9D40
        < 4 > IP adj out of GigabitEthernet0/1, addr 172.16.0.2 310D9BE0
        < 5 > IP adj out of GigabitEthernet0/0, addr 10.0.0.2 310D9D40
        < 6 > IP adj out of GigabitEthernet0/1, addr 172.16.0.2 310D9BE0
        < 7 > IP adj out of GigabitEthernet0/0, addr 10.0.0.2 310D9D40
        < 8 > IP adj out of GigabitEthernet0/1, addr 172.16.0.2 310D9BE0
        < 9 > IP adj out of GigabitEthernet0/0, addr 10.0.0.2 310D9D40
        <10 > IP adj out of GigabitEthernet0/1, addr 172.16.0.2 310D9BE0
        <11 > IP adj out of GigabitEthernet0/0, addr 10.0.0.2 310D9D40
        <12 > IP adj out of GigabitEthernet0/1, addr 172.16.0.2 310D9BE0
        <13 > IP adj out of GigabitEthernet0/0, addr 10.0.0.2 310D9D40
        <14 > IP adj out of GigabitEthernet0/1, addr 172.16.0.2 310D9BE0
        <15 > IP adj out of GigabitEthernet0/0, addr 10.0.0.2 310D9D40
      Subblocks:
        None

I artificially modified the route metrics some more to give Gi0/0 an even better metric relative to Gi0/1. This is reflected in the output below, where the traffic share counts are further apart. We should now see more hash buckets allocated to Gi0/0 because of its better route metric.

RoutingLoop_R1#show ip route 2.2.2.2
Routing entry for 2.2.2.2/32
  Known via "eigrp 1", distance 90, metric 130816, type internal
  Redistributing via eigrp 1
  Last update from 172.16.0.2 on GigabitEthernet0/1, 00:00:17 ago
  Routing Descriptor Blocks:
    172.16.0.2, from 172.16.0.2, 00:00:17 ago, via GigabitEthernet0/1
      Route metric is 154112, traffic share count is 17
      Total delay is 6000 microseconds, minimum bandwidth is 3500000 Kbit
      Reliability 255/255, minimum MTU 1500 bytes
      Loading 1/255, Hops 1
  * 10.0.0.2, from 10.0.0.2, 00:00:17 ago, via GigabitEthernet0/0
      Route metric is 130816, traffic share count is 20
      Total delay is 5010 microseconds, minimum bandwidth is 1000000 Kbit
      Reliability 255/255, minimum MTU 1500 bytes
      Loading 1/255, Hops 1 

Notice here that 9 of the 16 hash buckets are allocated to Gi0/0, providing slightly unequal cost load balancing. The allocation can be as lopsided as 15 buckets to 1 if the routing metrics call for it.

RoutingLoop_R1#show ip cef 2.2.2.2 internal
2.2.2.2/32, epoch 0, RIB[I], refcnt 5, per-destination sharing
  sources: RIB
  ifnums:
    GigabitEthernet0/0(3): 10.0.0.2
    GigabitEthernet0/1(4): 172.16.0.2
  path list 2BE69A4C, 3 locks, per-destination, flags 0x49 [shble, rif, hwcn]
    path 2BE914B0, share 17/17, type attached nexthop, for IPv4
      nexthop 172.16.0.2 GigabitEthernet0/1, IP adj out of GigabitEthernet0/1, addr 172.16.0.2 310D9BE0
    path 2BE9151C, share 20/20, type attached nexthop, for IPv4
      nexthop 10.0.0.2 GigabitEthernet0/0, IP adj out of GigabitEthernet0/0, addr 10.0.0.2 310D9D40
  output chain:
    loadinfo 2C9A0DE4, per-session, 2 choices, flags 0003, 5 locks
      flags [Per-session, for-rx-IPv4]
      16 hash buckets
        < 0 > IP adj out of GigabitEthernet0/1, addr 172.16.0.2 310D9BE0
        < 1 > IP adj out of GigabitEthernet0/0, addr 10.0.0.2 310D9D40
        < 2 > IP adj out of GigabitEthernet0/1, addr 172.16.0.2 310D9BE0
        < 3 > IP adj out of GigabitEthernet0/0, addr 10.0.0.2 310D9D40
        < 4 > IP adj out of GigabitEthernet0/1, addr 172.16.0.2 310D9BE0
        < 5 > IP adj out of GigabitEthernet0/0, addr 10.0.0.2 310D9D40
        < 6 > IP adj out of GigabitEthernet0/1, addr 172.16.0.2 310D9BE0
        < 7 > IP adj out of GigabitEthernet0/0, addr 10.0.0.2 310D9D40
        < 8 > IP adj out of GigabitEthernet0/1, addr 172.16.0.2 310D9BE0
        < 9 > IP adj out of GigabitEthernet0/0, addr 10.0.0.2 310D9D40
        <10 > IP adj out of GigabitEthernet0/1, addr 172.16.0.2 310D9BE0
        <11 > IP adj out of GigabitEthernet0/0, addr 10.0.0.2 310D9D40
        <12 > IP adj out of GigabitEthernet0/1, addr 172.16.0.2 310D9BE0
        <13 > IP adj out of GigabitEthernet0/0, addr 10.0.0.2 310D9D40
        <14 > IP adj out of GigabitEthernet0/0, addr 10.0.0.2 310D9D40
        <15 > IP adj out of GigabitEthernet0/0, addr 10.0.0.2 310D9D40
      Subblocks:
        None
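The bucket counts CEF chose in these outputs (8/8 for traffic shares 80:77, and 9/7 for shares 20:17) can be reproduced by distributing the 16 buckets in proportion to the traffic share counts. Below is a small Python sketch using largest-remainder rounding; it happens to match both outputs in this article, though Cisco's exact allocation logic may differ:

```python
def allocate_buckets(shares, num_buckets=16):
    """Split num_buckets across paths in proportion to their traffic
    share counts, using largest-remainder rounding."""
    total = sum(shares)
    quotas = [num_buckets * s / total for s in shares]
    alloc = [int(q) for q in quotas]          # floor of each path's quota
    leftover = num_buckets - sum(alloc)
    # hand any remaining buckets to the largest fractional remainders
    by_remainder = sorted(range(len(shares)),
                          key=lambda i: quotas[i] - alloc[i], reverse=True)
    for i in by_remainder[:leftover]:
        alloc[i] += 1
    return alloc

print(allocate_buckets([80, 77]))  # [8, 8] - shares too close to differ
print(allocate_buckets([20, 17]))  # [9, 7] - the unequal split shown above
```

This also shows why the first UCMP attempt came out 8/8: with only 16 buckets, shares of 80 and 77 round to the same allocation, so the metrics must diverge further before the forwarding split actually changes.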

I hope this helped you better understand some of the basics of CEF and how to troubleshoot and verify some of its attributes from the IOS CLI. Thanks for reading!