Wednesday, 4 January 2017

HSUPA scheduler, a closer look

In any shared-access scheme, the scheduler is key to the performance of the system. Here we take a closer look at an HSUPA (a.k.a. EUL) scheduler and how it functions in practice. With HSUPA the shared resource the scheduler has to control is the uplink interference. This is done via the Serving Grant (SG) concept, where an SG is translated into a particular power ratio (relative to the DPCCH). From the UE's point of view this is further translated into a particular data rate. This is a fairly simplistic explanation; for those interested there is a lot more detail in books, on the web and in the specs.
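As a rough sketch of the SG-to-power translation, the grant can be thought of as a linear power ratio applied on top of the current DPCCH power. The function name and the numbers below are purely illustrative, not taken from the spec tables:

```python
import math

def edpdch_power_dbm(sg_power_ratio: float, dpcch_power_dbm: float) -> float:
    """Translate a Serving Grant (treated here as a linear E-DPDCH/DPCCH
    power ratio) into an absolute E-DPDCH power in dBm, relative to the
    current DPCCH power."""
    return dpcch_power_dbm + 10 * math.log10(sg_power_ratio)

# Illustrative numbers: a grant of 8x the DPCCH power, DPCCH at -10 dBm
print(round(edpdch_power_dbm(8.0, -10.0), 1))  # -1.0 dBm
```

A larger grant means a higher allowed E-DPDCH power, which in turn maps to a larger transport block and hence a higher data rate.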

In terms of the graph (click to enlarge), this represents a speedtest on an HSUPA-capable live network. The Y axis is throughput in kilobits per second and the X axis is time.

The measured variables are:

Available Power Throughput represents the theoretical maximum throughput the UE could achieve with the power headroom it currently has. In this case it is capped by the maximum transport block size for a 10ms TTI (14480 bits).

Serving Grant Throughput represents the theoretical maximum throughput the UE could achieve with the SG it currently has. This also represents the uplink interference the NodeB is prepared to tolerate from the UE.

Actual Throughput is how much the UE is actually transmitting, based on the above and the data it has in its buffer.

Bearing the above in mind we can split the graph into three distinct regions:

Region A
Power headroom allows for the maximum possible throughput with the 10ms TTI. The allocated Serving Grant limits throughput to a maximum of approx. 550kbps. Actual throughput is only around 100kbps, however, due to the small amount of data in the buffer. Sub-optimal utilisation. (Note: this region is also the part of the speedtest where the DL was tested, hence the UL throughput is low as it is just ACKs/NACKs for the downlink data.)

Region B
Power, Serving Grant and Actual Throughput all peak at the maximum for the 10ms TTI. Optimal utilisation. (Note: this region is also the part of the speedtest where the UL was tested.)

Region C
Serving Grant remains at the maximum, but the UE becomes power limited and hence the actual throughput drops. Sub-optimal utilisation. (Note: this region is also the part of the speedtest where the UL was tested)

So from a scheduler perspective, what could be done differently? How could we have optimal utilisation in all regions? The answer lies in making use of the scheduling information the UE can provide as part of the MAC layer, specifically the UE Power Headroom (UPH) and the Total E-DCH Buffer Status (TEBS). Better use of TEBS reporting could improve utilisation in region A, and better use of UPH reporting could improve it in region C.
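The grant logic this implies can be sketched as a toy decision rule. Everything below (function name, thresholds, step sizes) is invented for illustration; a real scheduler works per-TTI with absolute/relative grants and interference budgets:

```python
def adjust_grant(sg_kbps: int, tebs_bytes: int, uph_db: float,
                 max_kbps: int = 1448) -> int:
    """Toy serving-grant adjustment driven by the UE's MAC scheduling info.
    max_kbps approximates the 14480-bit max TB per 10ms TTI.
    - TEBS small: the UE has little to send, so a large grant is wasted
      interference budget -> shrink the grant (region A).
    - UPH low: the UE cannot use its grant anyway, being power limited ->
      shrink the grant towards what the UE can actually transmit (region C).
    - Otherwise grow the grant towards the maximum (region B)."""
    if tebs_bytes < 1000:     # hypothetical "buffer nearly empty" threshold
        return max(sg_kbps // 2, 64)
    if uph_db < 3:            # hypothetical "power limited" threshold
        return max(sg_kbps // 2, 64)
    return min(sg_kbps * 2, max_kbps)

print(adjust_grant(550, 100, 20))     # region A: shrink to 275
print(adjust_grant(550, 50000, 20))   # region B: grow to 1100
print(adjust_grant(1000, 50000, 2))   # region C: shrink to 500
```

The point is simply that both reports are needed: TEBS tells the scheduler whether the grant can be used at all, and UPH tells it whether the UE has the power to use it.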

Sunday, 18 October 2015

4G RACH optimisation through SON

The scope of LTE SON (Self-Optimising Network) is vast, and although reading through the standard might make one think that images like the one above are a thing of the past, what vendors offer and operators deploy today is fairly limited. The feature most obviously in use is ANR (Automatic Neighbour Relations), and true to its name it has made neighbour planning a thing of the past in LTE deployments.

I was recently looking through some air interface logs of one UK operator and was pleasantly surprised to see one more SON feature in use, namely RACH reporting. RACH performance has historically relied on drive testing to quantify, as a failed procedure is not recorded by the network. With RACH reporting, the UE (if it supports the feature) can be requested to report how many preambles it used to access the network and whether it encountered any contention. In a simple solution this information could just be recorded statistically for an operator to look at. In a full SON solution, the requested preamble power could be adjusted (up or down, depending on whether UEs report too many preambles or too few), more RACH signatures could be assigned if contention is widely reported (typically the signature pool is statically split between contention-based and contention-free usage), or some other relevant parameter change could be triggered automatically.
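A minimal sketch of such a SON rule, assuming nothing more than the two reported quantities (preamble count, contention flag). The function name, thresholds and step sizes are all invented for illustration:

```python
from statistics import mean

def tune_prach(reports, target_dbm, contention_signatures):
    """Toy SON rule over UE RACH reports. Each report is a tuple of
    (number of preambles sent, contention-detected flag)."""
    avg_preambles = mean(n for n, _ in reports)
    contention_rate = sum(1 for _, c in reports if c) / len(reports)

    if avg_preambles > 3:        # UEs ramp many times -> initial target too low
        target_dbm += 2
    elif avg_preambles < 1.2:    # almost always first preamble -> maybe too hot
        target_dbm -= 2
    if contention_rate > 0.1:    # widespread contention -> enlarge the CB pool
        contention_signatures += 4
    return target_dbm, contention_signatures

# e.g. UEs averaging >4 preambles, with 40% reporting contention
print(tune_prach([(4, True), (5, False), (4, False), (3, True), (5, False)],
                 -104, 40))  # (-102, 44)
```

Even this crude loop captures the idea: the network reacts to what UEs actually experienced, with no drive test involved.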

The procedure itself starts with the UE reporting its support, as shown below (click to expand).
The network can then request the UE to report the result of the RACH procedure through the UE Information Request procedure, as shown below.
Finally, the UE reports the result in the UE Information Response message. In this particular example, two preambles had to be sent, as the first one was met with contention.
A simple solution to an age-old mobile network problem. Quite good..

Tuesday, 12 August 2014

Small cells out in the open

As mentioned in previous posts I am a big fan of small cells/femtocells, so it was great to see Vodafone in the UK using the product in a novel way. Essentially they are deploying these in small rural communities with no existing macro coverage, but rather than the more typical operator-led installation, they are asking rural communities to contact them and to provide the physical locations for installation and the necessary broadband connectivity. So all Vodafone have to do is turn up, mount the product on a chimney/wall/post, and off you go. There is lots more detail here.

These small cells typically radiate around 1W, compared to the 20-30W of a typical macro base station, and can handle around 32 connected users. They are also self-configuring (cell ID, PSC, neighbour detection) so require very little or no planning.

Small cells become a lot more interesting (and complicated) when they are deployed in the presence of a macro (so called HetNets), but even so the above story is still very interesting and encouraging to see.

Saturday, 31 May 2014

XLTE & the marketing side of technology

I was recently reading about Verizon's "XLTE" and it got me thinking about the marketing side of technology and specifically mobile technology.

Essentially XLTE is not a new technology; it is just Verizon's deployment of LTE over 20MHz of spectrum. This is something operators in many other countries have deployed from day one, but in the US it has become a big marketing deal. I imagine Verizon paid a lot of money for that additional spectrum and quite a lot to upgrade eNodeBs and antennas, so in order to get a return on investment a big marketing campaign was put into place. But how do you market 20MHz of spectrum? Here in the UK, EE has marketed it as "double speed" (double because their initial deployment was over 10MHz). But I guess that is quite boring. "XLTE" sounds much better.

All this of course is not new. To my recollection, it started with HSDPA. How do you market HSDPA? Surely not as High Speed Downlink Packet Access. A few terms appeared: there was 3G+, 3.5G, Super 3G, Turbo 3G. As HSDPA evolved, we also had HSPA+ and some operators even called it 4G!

What about WB-AMR? "Do you want a phone that supports Wide Band Adaptive Multi Rate, Sir?" Probably not. HD Voice, however, sounds great.

Needless to say, this will continue. LTE Advanced with Carrier Aggregation is just around the corner (actually launched already in Korea). So, XXXLTE maybe? 4.5G? 5G even? Let's see..

Wednesday, 18 December 2013

Optimal spectrum refarming for LTE

When looking to refarm some spectrum for LTE (e.g. 1800MHz spectrum from GSM), the following simple approach will lead to optimal results.

Start by thinking of how much spectrum you would ideally refarm; this will typically be 20MHz. Assuming this were possible, pick the centre frequency for this allocation. This determines your EARFCN. Then look at how much spectrum you can actually refarm. This will typically be less, as the traffic on the legacy RAT might not have reduced enough, or frequency re-planning your whole legacy network will take time. Most operators go for 10MHz, but in some cases 5MHz is also used.

Deploy your network.

After some time has passed and more spectrum is available, keep the centre frequency the same and just expand the bandwidth. Some cells might be using 10MHz, some 15MHz or 20MHz, but because the centre frequency has not changed, all mobility can be intra-frequency. No need for inter-frequency handovers, no need for additional neighbour planning, no need for measurement gaps, no need for additional SIBs being broadcast. UEs will seamlessly reselect and hand over, taking the bandwidth in use into account every time, as this is broadcast in the MIB, which is read in idle mode and after every handover.
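This is easy to check numerically. For Band 3 (1800MHz), 36.101 gives the DL carrier centre frequency as F_DL = F_DL_low + 0.1 * (N_DL - N_Offs_DL), with F_DL_low = 1805MHz and N_Offs_DL = 1200. The EARFCN chosen below is a hypothetical example:

```python
def band3_dl_centre_mhz(earfcn: int) -> float:
    """DL carrier centre frequency for Band 3 (1800 MHz) per 36.101:
    F_DL = F_DL_low + 0.1 * (N_DL - N_Offs_DL)."""
    F_DL_LOW_MHZ, N_OFFS_DL = 1805.0, 1200
    return F_DL_LOW_MHZ + 0.1 * (earfcn - N_OFFS_DL)

# Same EARFCN, growing bandwidth: the centre never moves, so mobility
# between 10/15/20 MHz cells stays intra-frequency.
earfcn = 1392  # hypothetical carrier choice
for bw_mhz in (10, 15, 20):
    c = band3_dl_centre_mhz(earfcn)
    print(f"{bw_mhz} MHz: {c - bw_mhz/2:.1f}-{c + bw_mhz/2:.1f} MHz, centre {c:.1f} MHz")
```

The occupied band grows symmetrically around the same centre, which is exactly why no inter-frequency machinery is needed.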

Although the above might sound like the obvious way of doing things, both EE in the UK (see here) and other LTE deployments (see here) don't follow it, but rather offset their two bandwidth allocations, leading to needless inter-frequency mobility.

Sunday, 8 December 2013

PRACH preamble power considerations in LTE

Unlike in UMTS, the PRACH in LTE is used only for the transmission of random access preambles. These are used when the UE wants to access the system from RRC idle, as part of the RRC re-establishment procedure following a radio link failure, during handover, or when the UE finds itself out of sync.

As part of the PRACH procedure the UE needs to determine the power to use for the transmission of the preamble, and for this it looks in SIB2 for the preambleInitialReceivedTargetPower IE. As shown in the extract above (taken from a live network), this is expressed in dBm and in this specific case is set to -104dBm. This is the expected power level of the PRACH preamble when it reaches the eNodeB.

Also broadcast is the reference signal power, which in our case is set to 18dBm. Based on this and a current measurement of the RSRP, the UE can determine the pathloss. Once it knows the pathloss it can determine how much power to allocate to the PRACH preamble so that it reaches the eNodeB at -104dBm.

So let's say the UE measures an RSRP of -80dBm. Based on the broadcast reference signal power it can calculate the pathloss, PL = 18 - (-80) = 98dB. This means that for a preamble to reach the eNodeB at -104dBm it needs to be transmitted at P_PRACH = -104 + 98 = -6dBm. That is fine.
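The open-loop rule described above fits in a few lines. This is a sketch of the calculation in the post, not of the full 36.213 power ramping procedure, and the function name is my own:

```python
def prach_tx_power_dbm(rsrp_dbm: float, rs_power_dbm: float = 18,
                       target_dbm: float = -104, p_max_dbm: float = 23) -> float:
    """Preamble transmit power from the open-loop rule in the post:
    pathloss = RS power - RSRP, then target + pathloss, capped at the
    UE's maximum transmit power (23 dBm for a standard LTE UE)."""
    pathloss_db = rs_power_dbm - rsrp_dbm
    return min(target_dbm + pathloss_db, p_max_dbm)

print(prach_tx_power_dbm(-80))   # -6 dBm, as in the worked example
print(prach_tx_power_dbm(-126))  # capped at 23 dBm (40 dBm would be needed)
```

Once the cap kicks in, the preamble arrives below the configured target and detection starts to suffer, which is the situation examined next.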

But what happens if we consider other values of RSRP? For example at the cell edge? The cell edge can be determined by the value of qRxLevMin. Looking at SIB1 from the same network we can see that this is set to -126dBm (IE x 2).

So at an RSRP of -126dBm the pathloss is PL = 18 - (-126) = 144dB, so the UE needs to transmit the preamble at P_PRACH = -104 + 144 = 40dBm. Is this ok? Actually no, as LTE UEs are only capable of transmitting at a maximum power of 23dBm. Does this mean the UE does not even go through the PRACH procedure? No, but it will be limited to transmitting at 23dBm, meaning that the preamble will reach the eNodeB at -121dBm, and the probability of a successful detection is very low.

In fact, based on this network, we can say that anywhere in the cell where the RSRP is below -109dBm a PRACH attempt will be power limited, with a lower probability of detection. This is something to think about next time your LTE signal strength is low and your phone seems unresponsive..
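That -109dBm threshold follows directly from the numbers above: the UE hits its cap exactly when target + pathloss equals the maximum power, i.e. when RSRP = RS power - (P_max - target).

```python
# RSRP below which the preamble becomes power limited, using this
# network's broadcast values:
rs_power_dbm, target_dbm, p_max_dbm = 18, -104, 23
rsrp_limit_dbm = rs_power_dbm - (p_max_dbm - target_dbm)
print(rsrp_limit_dbm)  # -109 dBm
```

Anything below that and the arriving preamble power falls short of -104dBm by exactly the shortfall in RSRP.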

Sunday, 27 October 2013

3.2Mbps @ 2003? I don't think so..

Three UK have put up the above graphic on their website, here, depicting their network evolution from a throughput point of view.

It is quite nice to look at, but was 3.2Mbps possible in 2003? I don't think so, as at that time only R99 networks were available and the maximum throughput was 384kbps. Speeds only increased with the first HSDPA networks in 2005, and even then they were limited to 1.8Mbps (Category 12 devices).

Let's see how long it takes for them to correct this.. :)