In any shared access scheme, the scheduler is key to the performance of the system. Here we take a closer look at an HSUPA (a.k.a. EUL) scheduler and how it functions in practice. With HSUPA, the shared resource the scheduler has to control is the uplink interference. This is done via the Serving Grant (SG) concept, where an SG is translated into a particular power ratio (relative to the DPCCH). From the UE's point of view this is further translated into a particular data rate. The above is a fairly simplistic explanation; for those interested, there is a lot more detail in books, on the web and in the specs.
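To make the SG → power ratio → data rate chain a little more concrete, here is a toy sketch. The function name, reference values and simple linear scaling are illustrative assumptions only; the real mapping goes through the 3GPP grant and E-TFC selection tables:

```python
def grant_to_rate_kbps(sg_power_ratio, ref_ratio=1.0, ref_rate_kbps=64.0):
    """Toy model: the supportable data rate scales roughly with the granted
    E-DPDCH/DPCCH power ratio. Reference point (ref_ratio -> ref_rate_kbps)
    is made up for illustration; the real mapping uses the 3GPP tables."""
    return ref_rate_kbps * (sg_power_ratio / ref_ratio)

# Doubling the granted power ratio roughly doubles the rate in this toy model:
print(grant_to_rate_kbps(1.0))  # -> 64.0
print(grant_to_rate_kbps(2.0))  # -> 128.0
```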
In terms of the graph (click to enlarge), this represents a speedtest on an HSUPA-capable live network. The Y axis is throughput in kilobits per second and the X axis is time.
The measured variables are:
Available Power Throughput represents the theoretical maximum throughput the UE could achieve with the power headroom it currently has. In this case it is limited by the maximum transport block size for a 10ms TTI (14480).
Serving Grant Throughput represents the theoretical maximum throughput the UE could achieve with the SG it currently has. This also represents the UL interference the NodeB can expect, and tolerate, from the UE.
Actual Throughput is how much the UE is actually transmitting, given the two limits above and the data it has in its buffer.
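The relationship between the three curves can be sketched very simply: the UE transmits at whichever of the three constraints is tightest (function and variable names here are illustrative, not from any spec):

```python
def actual_throughput_kbps(available_power_tp, serving_grant_tp, buffer_tp):
    """The UE's actual rate is capped by the tightest constraint: its power
    headroom, its Serving Grant, or the data waiting in its buffer."""
    return min(available_power_tp, serving_grant_tp, buffer_tp)

# Example with the numbers from region A below: power allows the max (14480),
# the SG caps at ~550 kbps, but the buffer only supplies ~100 kbps of data.
print(actual_throughput_kbps(14480, 550, 100))  # -> 100
```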
Bearing the above in mind, we can split the graph into three distinct regions:
Region A: Power headroom allows the maximum possible throughput for a 10ms TTI. The allocated Serving Grant limits throughput to a maximum of approx. 550kbps. Actual throughput is only around 100kbps, however, because of how little data is in the buffer. Sub-optimal utilisation. (Note: this region is also the part of the speedtest where the DL was tested, hence the UL throughput is low; it is just ACKs/NACKs at the application layer.)
Region B: Available Power, Serving Grant and Actual Throughput all peak at the maximum for a 10ms TTI. Optimal utilisation. (Note: this region is also the part of the speedtest where the UL was tested.)
Region C: The Serving Grant remains at the maximum, but the UE becomes power limited and hence the actual throughput drops. Sub-optimal utilisation. (Note: this region is also the part of the speedtest where the UL was tested.)
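Given the three measured variables, the three regions can be told apart programmatically. A minimal sketch (threshold and labels are my own, not from any spec):

```python
def limiting_factor(power_tp, grant_tp, actual_tp, tol=0.95):
    """Return which constraint caps the UE's throughput. `tol` allows a
    small measurement margin; the value is an illustrative assumption."""
    cap = min(power_tp, grant_tp)
    if actual_tp < tol * cap:
        return "buffer"   # region A: UE has too little data to send
    if power_tp < grant_tp:
        return "power"    # region C: power headroom limits the rate
    return "grant"        # region B: UE is running at the granted rate

print(limiting_factor(14480, 550, 100))      # region A -> "buffer"
print(limiting_factor(14480, 14480, 14480))  # region B -> "grant"
print(limiting_factor(5000, 14480, 5000))    # region C -> "power"
```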
So, from a scheduler perspective, what could be done differently? How could we achieve optimal utilisation in all three regions? The answer lies in making better use of the scheduling information the UE can provide as part of the MAC layer, specifically the UE Power Headroom (UPH) and the Total E-DCH Buffer Status (TEBS). Making better use of TEBS reporting would improve utilisation in region A; making better use of UPH reporting would improve utilisation in region C.
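The idea can be sketched as a simple grant update rule: never grant more than the UE's reported power headroom can support (UPH) or more than its reported buffer can fill (TEBS). This is an illustrative sketch only; a real scheduler works on SG indices and also has to weigh cell-wide noise rise across all UEs:

```python
def next_grant_kbps(current, uph_kbps, tebs_kbps, max_kbps=14480, step=0.5):
    """Move the grant toward the tightest of: rate supportable by the UE's
    power headroom (from its UPH report), rate needed to drain its buffer
    (from its TEBS report), and the cell maximum. `step` smooths changes.
    All names and the 0.5 smoothing factor are illustrative assumptions."""
    target = min(uph_kbps, tebs_kbps, max_kbps)
    return current + step * (target - current)

# Region A: UE reports a near-empty buffer -> the grant shrinks toward the
# buffer-limited rate, freeing uplink interference budget for other UEs.
print(next_grant_kbps(14480, 14480, 100))  # -> 7290.0
# Region C: UE reports low power headroom -> the grant shrinks toward what
# the UE can actually transmit, again freeing interference budget.
print(next_grant_kbps(14480, 5000, 14480))  # -> 9740.0
```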