Ultra-Reliable and Low-Latency Wireless Communication: Tail, Risk and Scale
Abstract
Ensuring ultra-reliable and low-latency communication (URLLC) for 5G wireless networks and beyond is of capital importance and is currently receiving tremendous attention in academia and industry. At its core, URLLC mandates a departure from expected utility-based network design approaches, in which relying on average quantities (e.g., average throughput, average delay and average response time) is no longer an option. Instead, a principled and scalable framework which takes into account delay, reliability, packet size, network architecture and topology (across access, edge, and core) and decision-making under uncertainty is sorely lacking. The overarching goal of this article is to take a first step towards filling this void. Towards this vision, after providing definitions of latency and reliability, we closely examine various enablers of URLLC and their inherent tradeoffs. Subsequently, we focus our attention on a plethora of techniques and methodologies pertaining to the requirements of ultra-reliable and low-latency communication, as well as their applications through selected use cases. These results provide crisp insights for the design of low-latency and highly reliable wireless networks.
I. Introduction
The phenomenal growth of data traffic, spurred by internet-of-things (IoT) applications ranging from machine-type communications (MTC) to mission-critical communications (autonomous driving, drones and augmented/virtual reality), is posing unprecedented challenges in terms of capacity, latency, reliability and scalability. This is further exacerbated by: i) a growing network size and increasing interactions between nodes; ii) a high level of uncertainty due to random changes in the topology; and iii) heterogeneity across applications, networks and devices. The stringent requirements of these new applications warrant a paradigm shift from reactive and centralized networks towards massive, low-latency, ultra-reliable and proactive 5G networks. Up until now, human-centric communication networks have been engineered with a focus on improving network capacity, with little attention to latency or reliability, while assuming few users.
Achieving ultra-reliable and low-latency communication (URLLC) represents one of the major challenges facing 5G networks. URLLC introduces a plethora of challenges in terms of system design. While enhanced mobile broadband (eMBB) aims at high spectral efficiency, it can also rely on hybrid automatic repeat request (HARQ) retransmissions to achieve high reliability. This is, however, not the case for URLLC due to the hard latency constraints. Moreover, while ensuring URLLC at a link level in controlled environments is relatively easy, doing so at a network level, over a wide area and in remote scenarios (e.g., remote surgery) is notoriously difficult. This is due to the fact that for local-area use cases latency is mainly due to wireless medium access, whereas wide-area scenarios suffer from latency due to intermediate nodes/paths, fronthaul/backhaul and the core/cloud. Moreover, the typical block error rate (BLER) of 4G systems is on the order of 10^-2, which can be achieved by channel coding (e.g., Turbo codes) and retransmission mechanisms (e.g., via HARQ). By contrast, the performance requirements of URLLC are more stringent, with a target BLER of 10^-5 or lower depending on the use case. From a physical-layer perspective, the URLLC design is challenging as it ought to satisfy two conflicting requirements: low latency and ultra-high reliability. On the one hand, minimizing latency mandates the use of short packets, which in turn causes a severe degradation in channel coding gain. On the other hand, ensuring reliability requires more resources (e.g., parity, redundancy and retransmissions), albeit at the cost of increased latency (notably for time-domain redundancy). Furthermore, URLLC calls for a system design in which all users (including cell-edge users) connected to the radio access network must receive an equal grade of service, for which the outage capacity is of interest, as opposed to the ergodic capacity considered in 4G.
If successful, URLLC will unleash a plethora of novel applications and digitize a multitude of verticals. For instance, the targeted 1 ms latency (and even lower) is crucial to the use of haptic feedback and real-time sensors to allow doctors to examine patients' bodies from a remote operating room. Similarly, the construction industry can operate heavy machinery remotely and minimize other potential hazards. For sports fans, instead of watching NBA games on TV, a virtual reality (VR) headset allows a 360-degree courtside view, feeling the intensity of the crowd from the comfort of the home. The end-to-end latency for XR (augmented, virtual and immersive reality) represents a serious challenge that ought to be tackled. Likewise, ultra-high reliability in terms of successful packet delivery rate, which may be as high as 99.999% (or even higher), will help automate factories, spearhead remote monitoring and so forth. Undoubtedly, these technological advances will only be possible with a scalable, ultra-reliable and low-latency network.
In essence, as shown in Figure 1, URLLC can be broken down into three major building blocks, namely: (i) risk, (ii) tail, and (iii) scale.

Risk: risk is naturally encountered when dealing with decision making under uncertainty, when channels are time-varying, and in the presence of network dynamics. Here, decentralized or semi-centralized algorithms providing performance guarantees and robustness are at stake.

Tail: the notion of tail behavior in wireless systems is inherently related to the tail of the random traffic demand, the tail of the latency distribution, intra-/inter-cell interference, and users that are at the cell edge, power-limited or in deep fade. Therefore, a principled framework and mathematical tools that characterize these tails, focusing on percentiles and extreme events, are needed.

Scale: this is motivated by the sheer number of devices, antennas, sensors and other nodes, which poses serious challenges in terms of resource allocation and network design. In contrast to cumbersome and time-consuming Monte Carlo simulations, mathematical tools providing a tractable formulation, analysis and crisp insights are needed.
The article is structured as follows: to guide the readers and set the stage for the technical part, definitions of latency and reliability are presented in Section II. Section III delves into the details of some of the key enablers of URLLC, and Section IV examines several tradeoffs cognizant of the URLLC characteristics. Next, Section V provides a state-of-the-art summary of the most recent and relevant works, while Section VI presents a variety of tools and techniques tailored to the unique features of URLLC (risk, scale and tail). Finally, Section VII illustrates through selected use cases the usefulness of some of these methodologies, followed by concluding remarks.
II. Definitions
II-A Latency

End-to-end (E2E) latency: E2E latency includes the over-the-air transmission delay, queuing delay, processing/computing delay and retransmission delay (if and when needed). For a round-trip latency of 1 ms and owing to speed-of-light constraints (about 300,000 km/s), the maximum distance at which a receiver can be located is approximately 150 km.
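This distance bound follows directly from the propagation delay; a minimal sketch of the arithmetic (assuming the round-trip budget is spent entirely on propagation, ignoring processing and queuing):

```python
# Speed of light sets a hard floor on E2E latency: even with zero
# processing/queuing delay, distance alone consumes the budget.
SPEED_OF_LIGHT_KM_S = 300_000  # ~3e5 km/s

def max_distance_km(rtt_budget_s: float) -> float:
    """One-way distance reachable if the whole round-trip latency
    budget is spent on propagation."""
    return SPEED_OF_LIGHT_KM_S * rtt_budget_s / 2

print(max_distance_km(1e-3))  # 1 ms round trip -> 150.0 km
```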

User plane latency (3GPP) [1]: defined as the one-way time it takes to successfully deliver an application layer packet/message from the radio protocol ingress point to the radio protocol egress point of the radio interface, in either uplink or downlink in the network, for a given service in unloaded conditions (assuming the user equipment (UE) is in active state). The minimum requirements for user plane latency are 4 ms for eMBB and 1 ms for URLLC, assuming a single user.

Control plane latency (3GPP) [1]: defined as the transition time from a most "battery-efficient" state (e.g., idle state) to the start of continuous data transfer (e.g., active state). The minimum requirement for control plane latency is 20 ms.
II-B Reliability
In general, reliability is defined as the probability that a packet of size D is successfully transferred within a time period T. That is, reliability stipulates that packets are successfully delivered and that the latency bound is satisfied. However, other definitions can be encountered:

Reliability (3GPP) [1]: capability of transmitting a given amount of traffic within a predetermined time duration with high success probability. The minimum requirement is a success probability of 1 - 10^-5 for transmitting a layer-2 protocol data unit of 32 bytes within 1 ms.

Reliability per node: defined in terms of the transmission error probability, the queuing delay violation probability and the proactive packet dropping probability.

Control channel reliability: defined as the probability of successfully decoding the scheduling grant or other metadata.

Availability: defined as the probability that a given service is available (i.e., coverage). For instance, 99.99% availability means that one user among 10,000 does not receive proper coverage.
We underscore the fact that the URLLC service requirements are end-to-end, whereas the 3GPP and ITU requirements focus on the one-way radio latency over the 5G radio network [1].
III. Key Enablers for URLLC
In this section, key enablers for low-latency and high-reliability communication are examined. An overview of some of these enablers is highlighted in Figure 2, while Table I provides a comparison between 4G and 5G.
III-A Low Latency
A latency breakdown yields deterministic and random components that are either fixed or scale with the number of nodes. While the deterministic components define the minimum latency, the random components shape the latency distribution and, more specifically, its tails. Deterministic latency components consist of the time to transmit information and overhead (i.e., parity bits, reference signals and control data), and waiting times between transmissions. The random components include the time to retransmit information and overhead when necessary, queuing delays, random backoff times, and other processing/computing delays. In what follows, various enablers for low-latency communication are examined:

Short transmission time interval (TTI), short frame structure and HARQ: reducing the TTI duration (e.g., from 1 ms in LTE down to a few OFDM symbols in 5G new radio; two OFDM symbols span 71.43 microseconds with a subcarrier spacing of 30 kHz), by using fewer OFDM symbols per TTI and shortening OFDM symbols via wider subcarrier spacing, as well as lowering the HARQ round-trip time (RTT), reduces latency. Less time is then needed to fit enough HARQ retransmissions to meet a reliability target, and more queuing delay can be tolerated before the deadline (owing to the HARQ retransmission constraints). Note, however, that reducing the OFDM symbol duration increases the subcarrier spacing, so fewer resource blocks are available in the frequency domain, causing more queuing. On the flip side, a shorter TTI introduces more control overhead, thereby reducing capacity (lower availability of resources for other URLLC data transmissions). This shortcoming can be alleviated using grant-free transmission in the uplink. On the downlink, longer TTIs are needed at high offered loads to cope with non-negligible queuing delays [2].
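The interplay between TTI length and the number of HARQ attempts that fit within a deadline can be sketched as follows (a first-order model; the HARQ round-trip of four TTIs and the timing values are illustrative assumptions, not 3GPP-exact figures):

```python
def max_harq_attempts(latency_budget_s: float, tti_s: float,
                      harq_rtt_ttis: int = 4, queuing_s: float = 0.0) -> int:
    """Transmission attempts (initial + retransmissions) fitting in the
    latency budget, with the HARQ round-trip expressed in TTIs."""
    remaining = latency_budget_s - queuing_s - tti_s  # initial attempt
    if remaining < 0:
        return 0
    return 1 + int(remaining // (harq_rtt_ttis * tti_s))

# Shorter TTIs leave room for more retransmissions within a 1 ms budget:
for tti in (1e-3, 0.143e-3, 0.0714e-3):  # LTE subframe vs. short NR TTIs
    print(f"TTI {tti*1e3:.4f} ms -> {max_harq_attempts(1e-3, tti)} attempt(s)")
```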

eMBB/URLLC multiplexing: although a static/semi-static resource partitioning between eMBB and URLLC transmissions may be preferable from a latency/reliability viewpoint, it is inefficient in terms of system resource utilization, necessitating dynamic multiplexing [2]. Achieving high system reliability for URLLC requires allocating more frequency-domain resources to an uplink (UL) transmission instead of boosting power on narrowband resources. This means wideband resources are needed for URLLC UL transmission to achieve high reliability with low latency. In addition, intelligent scheduling techniques to preempt other scheduled traffic are needed when a low-latency packet arrives in the middle of the frame (i.e., puncturing the current eMBB transmission). At the same time, the eMBB traffic should be minimally impacted when maximizing the URLLC outage capacity.

Edge caching, computing and slicing: pushing caching and computing resources to the edge has been shown to significantly reduce latency [3, 4]. This trend will continue unabated with the advent of resource-intensive applications (e.g., AR/VR). Network slicing is also set to play a pivotal role in allocating dedicated resources (i.e., caching/bandwidth/computing) for mission-critical services.

On-device machine learning / artificial intelligence (AI) at the edge: machine learning (ML) lies at the foundation of proactive and low-latency networks. Traditional ML is based on the precept of a single node (in a centralized location) with full access to the global dataset and a massive amount of storage and computing, sifting through this data for classification and inference. Nevertheless, this approach is clearly inadequate for latency-sensitive and high-reliability applications, sparking a huge interest in distributed machine learning (including distributed variants of deep learning, pioneered by LeCun and Hinton). This mandates a novel scalable and distributed machine learning framework, mimicking the brain, in which the training data describing the problem is stored in a distributed fashion across a number of interconnected nodes and the optimization problem is solved collectively. This is the next frontier for AI and machine learning, also referred to as AI at the edge or on-device machine learning [5].

Grant-free vs. grant-based access: this relates to dynamic UL scheduling or contention-based access for sporadic/bursty traffic versus persistent scheduling for periodic traffic. Fast uplink access is advocated for devices on an a priori basis at the expense of lower capacity (due to resource pre-allocation). For semi-persistent scheduling, unused resources can be reallocated to eMBB traffic. For group-based semi-persistent scheduling, contention-based access is carried out within a group of users with similar characteristics, which helps minimize collisions. In this case the base station (BS) controls the load and dynamically adjusts the size of the resource pool. The BS could also proactively schedule a retransmission opportunity shared by a group of UEs with similar traffic for better resource utilization.

Non-orthogonal multiple access (NOMA): NOMA (and its variants) reduces latency by supporting far more users than conventional orthogonal approaches, leveraging power- or code-domain multiplexing in the uplink and then using successive interference cancellation (SIC) or more advanced receiver schemes (e.g., message passing or Turbo reception). However, issues related to imperfect channel state information (CSI), user ordering, processing delay due to multiplexing and other dynamics, all of which impact latency (and reliability), are not well understood.
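As an illustration of why power-domain multiplexing helps, consider a minimal two-user downlink NOMA sketch (ideal SIC and unit noise assumed; all gains and powers are illustrative):

```python
from math import log2

def noma_rates(g_near: float, g_far: float,
               p_near: float, p_far: float, noise: float = 1.0):
    """Achievable rates for two-user power-domain NOMA: the far user
    treats the near user's signal as interference; the near user first
    decodes and cancels the far user's signal (perfect SIC assumed)."""
    r_far = log2(1 + p_far * g_far / (p_near * g_far + noise))
    r_near = log2(1 + p_near * g_near / noise)  # after ideal SIC
    return r_near, r_far

# More power to the far (weaker) user keeps both rates non-zero in one slot:
print(noma_rates(g_near=10.0, g_far=1.0, p_near=2.0, p_far=8.0))
```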

Low-earth orbit (LEO) satellites and unmanned aerial vehicles/systems: for long-range applications and rural areas, LEO satellites are a key means of reducing backhaul latency. In addition, the use of unmanned aerial systems can help reduce latency for mission-critical applications.

Joint flexible resource allocation for UL/DL: for TDD systems, joint UL/DL allocation and the interplay of time-slot length versus switching (turnaround) cost need to be studied. This topic has been studied in the context of LTE-A. Here, for FDD both LTE evolution and new radio (NR) are investigated, whereas for TDD only NR is investigated, since LTE TDD is not considered for URLLC enhancements.
III-B Reliability
The main factors affecting reliability stem from: i) collisions with other users due to uncoordinated channel access; ii) coexistence with other systems in the same frequency channels; iii) interference from users in adjacent channels; iv) Doppler shifts from moving devices; and v) difficulty of synchronization, outdated CSI, time-varying channel effects and delayed packet reception. Reliability at the physical layer (typically expressed as a block error rate) depends on factors such as the channel, constellation, error detection codes, modulation technique, diversity and retransmission mechanisms. Techniques to increase reliability include using low-rate codes to have enough redundancy in poor channel conditions, retransmissions for error correction, and ARQ at the transport layer. Crucially, diversity and beamforming provide multiple independent paths from the transmitter to the receiver and boost the received signal-to-noise ratio (SNR). Frequency diversity occurs when information is transmitted over a frequency-selective channel, whereas time diversity occurs when a forward error correction (FEC) codeword is spread out over many coherence times so that it sees many different channel realizations (e.g., using HARQ). Cooperative (multiuser) diversity arises when a transmission is relayed by different users from the source to the destination. In what follows, various enablers for reliability are discussed.

Multi-connectivity and harnessing time/frequency/RAT diversity: while diversity is a must, time diversity is not a viable solution when the tolerable latency is shorter than the channel coherence time or when the reliability requirements are very stringent. On the other hand, frequency diversity may not scale with the number of users/devices, making spatial diversity the only solution. In this regard, multi-connectivity is paramount to ensuring highly reliable communication. Nevertheless, several fundamental questions emerge: what is the optimal number of links needed to ensure a given reliability target? how do correlated links compare with independent links? and how should synchronization, non-reciprocity and other imperfections be dealt with?
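The first of these questions has a simple closed form under an independence assumption; a minimal sketch (the per-link outage and reliability target are illustrative values):

```python
import math

def links_needed(p_link_outage: float, target_outage: float) -> int:
    """Smallest N such that the joint outage p_link_outage**N of N
    independent links falls below the target. Independence is assumed;
    correlated links would require more. The small epsilon guards
    against floating-point edge cases at exact powers."""
    ratio = math.log(target_outage) / math.log(p_link_outage)
    return math.ceil(ratio - 1e-9)

# 10% per-link outage, five-nines reliability target:
print(links_needed(0.1, 1e-5))  # -> 5 independent links
```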

Multicast: when receivers are interested in the same information (e.g., mission-critical traffic safety or a common field-of-view in VR), multicast is more reliable than unicast. However, reliability can be sensitive to the coverage range, to which MCS is used within the multicast group, and to who determines that MCS. The usefulness of multicast will also depend on whether the transmission is long-range or short-range, as performance is inherently limited by cell-edge users.

Data replication (contents and computations): needed when coordination among nodes is not possible, when only a low-rate backhaul is available for coordination, or due to lack of CSI. This comes at the expense of lower capacity. One solution could be to replicate the same data until an ACK is received, in the case of HARQ.

HARQ + short frame structure, short TTI: these improve outage capacity by allowing sufficient retransmissions to achieve high reliability. Here, optimal MCS selection under the required reliability and latency constraints (not necessarily optimized for spectral efficiency) is an open research problem.

Control channel design: unlike LTE, where the focus was primarily on protecting data rather than the control channel, ensuring high reliability for the control channel is a must. This can be done by sending the delay budget information from the UE to the BS in the control channel, such that on the downlink the BS can select the optimal modulation and coding scheme (MCS) based on both the channel quality indicator (CQI) report and the remaining latency budget. (Adaptive CQI reporting is key in URLLC, whereby more resources can be allocated to enhance reliability, or the encoding rate of the CQI can be changed.) In addition, replicating the same data until receiving an acknowledgement (ACK) for HARQ can be envisaged, at the cost of wasting resources.

Manufacturing diversity via network coding and relaying: when time diversity cannot be relied upon (e.g., under extreme latency/reliability constraints) or in the presence of extreme fading events, manufacturing diversity and robustness are key to ensuring URLLC. Hence, exploiting multiuser diversity and network coding with simultaneous relaying to enable two-way reliable communication, without relying on time and frequency diversity, is important. Furthermore, network densification not only reduces latency by shrinking the transmission range, but also increases capacity. This comes at the expense of backhaul provisioning, whose cost needs to be factored in.

Network slicing: this refers to the process of slicing a physical network into logical subnetworks, each optimized for a specific application, so as to ensure dedicated resources for verticals (e.g., vehicle-to-everything (V2X), VR). Slicing is also set to play a pivotal role for heterogeneous applications with different requirements.

Proactive packet drop: when the channel is in a deep fade, packets that cannot be transmitted even with maximal transmit power can be discarded proactively at the transmitter. Similarly, packet drops can arise at the receiver when the maximum number of retransmissions is reached. This differs from eMBB scenarios, which assume infinite queue buffers. In this case, either spatial diversity should be used or resources need to be increased.

Space-time block codes: orthogonal space-time block coding has been a very successful transmit diversity technique because it achieves full diversity without CSI at the transmitter and without the need for joint decoding of multiple symbols. Typically, it is characterized by the number n_s of independent symbols transmitted over T time slots; the code rate is r = n_s/T. In the presence of channel imperfections, orthogonal space-time block coding can outperform other diversity-seeking approaches such as maximum ratio transmission.
Table I: Comparison between 4G and 5G.

Aspect          | 4G                                            | 5G
Metadata        | important                                     | crucial
Packet size     | long (MBB)                                    | short (URLLC), long (eMBB)
Design          | throughput-centric; average delay good enough | latency- and reliability-centric; tails matter
Reliability     | ~95% or less                                  | 99.999% and higher
Rate            | Shannonian (long packets)                     | rate loss due to short packets
Delay violation | exponential decay (effective bandwidth)       | faster-than-exponential decay
Latency         | ~15 ms RTT based on 1 ms subframe             | 1 ms and less; shorter TTI and HARQ RTT
Queue size      | unbounded                                     | bounded
Frequency bands | sub-6 GHz                                     | sub-6 GHz and above (URLLC at sub-6 GHz)
Scale           | a few users/devices                           | billions of devices
IV. Fundamental Tradeoffs in URLLC
URLLC features several system design tradeoffs which deserve a study on their own. In what follows, we zoom in on some of these tradeoffs:
Finite vs. large blocklength: in low-latency applications with small blocklengths, there is always a probability that transmissions fail due to noise, deep fading, collision, interference, etc. In this case, the maximum coding rate when transmitting k information bits using coded packets spanning n channel uses is lower than the Shannon rate. Furthermore, for high reliability, data must be encoded at a rate significantly lower than the Shannon capacity (if the blocklength is large, vanishing error probability is achievable and the rate approaches the well-known Shannon capacity of the AWGN channel). A number of works have shown that the Shannon capacity model significantly overestimates the delay performance for such applications, which would lead to insufficient resource allocations. Despite a huge interest in the field [6, 7], a solid theoretical framework modeling the performance of such systems under short time spans, finite blocklength and interference is lacking.
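The finite-blocklength rate penalty can be made concrete with the widely used normal approximation R ≈ C - sqrt(V/n)·Q^{-1}(ε) for the AWGN channel (a sketch omitting the O(log n/n) refinement; the SNR and error target below are illustrative):

```python
from math import e, log2, sqrt
from statistics import NormalDist

def normal_approx_rate(snr: float, n: int, eps: float) -> float:
    """Maximal coding rate (bits/channel use) at blocklength n and
    block error probability eps over AWGN, via the normal
    approximation with channel dispersion V (in bits^2)."""
    capacity = log2(1 + snr)
    dispersion = (snr * (snr + 2) / (2 * (snr + 1) ** 2)) * log2(e) ** 2
    q_inv = NormalDist().inv_cdf(1 - eps)  # Q^{-1}(eps)
    return capacity - sqrt(dispersion / n) * q_inv

snr = 10.0  # 10 dB
print(log2(1 + snr))                          # Shannon capacity, ~3.459
print(normal_approx_rate(snr, 100, 1e-5))     # short packet: clear penalty
print(normal_approx_rate(snr, 10_000, 1e-5))  # long packet: near capacity
```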
Spectral efficiency vs. latency: achieving low latency incurs a spectral efficiency penalty (due to HARQ and short TTIs). A system characterization of spectral efficiency versus latency in multiple access and broadcast systems, taking into account (a) bursty packet arrivals, (b) a mixture of low-latency and delay-tolerant traffic, and (c) channel fading and multipath, is missing. Low-latency transmission on the uplink can be achieved with a single-shot slotted-Aloha-type strategy in which the device sends its data immediately, without incurring the delay associated with making a request and receiving a scheduling grant.
Device energy consumption vs. latency: a fundamental tradeoff that needs to be characterized is the relationship between device energy consumption and latency. In wireless communications, devices need to be in sleep or deep sleep mode when they are not transmitting or receiving, to extend battery life. Since applications in the network may send packets to the device, the device needs to wake up periodically to check whether packets are waiting. The frequency with which the device checks for incoming packets determines both the packet latency and the energy consumption: the more frequent the checks, the lower the latency but the higher the energy consumption.
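A first-order sketch of this wake-up tradeoff (the energy figures are illustrative assumptions; a packet arriving at a uniformly random time waits half a period on average):

```python
def wakeup_tradeoff(period_s: float, e_wake_j: float, p_sleep_w: float):
    """Average downlink waiting latency and average power draw for a
    device that wakes every period_s seconds to check for packets."""
    avg_latency_s = period_s / 2            # uniform arrival assumption
    avg_power_w = p_sleep_w + e_wake_j / period_s
    return avg_latency_s, avg_power_w

# More frequent checks: lower latency, higher energy consumption.
for period in (0.01, 0.1, 1.0):
    lat, pwr = wakeup_tradeoff(period, e_wake_j=1e-3, p_sleep_w=1e-5)
    print(f"period {period:5.2f} s: latency {lat:.3f} s, power {pwr:.5f} W")
```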
Energy expenditure vs. reliability: higher reliability can be attained with several low-power transmissions instead of a single highly reliable, high-power transmission, as shown in [8] and in mean-field analyses of ultra-dense scenarios. However, this depends on the diversity order that can be achieved, and on whether independent or correlated fading is considered.
Reliability vs. latency and rate: generally speaking, higher reliability requires higher latency due to retransmissions, but there could also be cases where both are optimized. In terms of data rates, it was shown in [9] that guaranteeing higher rates incurs lower reliability, and vice versa.
SNR vs. diversity: how does the SNR requirement decrease as a function of the number of network nodes and the diversity order? Does the use of higher frequency bands provide more or less diversity? How much SNR is needed to compensate for time-varying channels, bad fading events, etc.? Furthermore, how does reliability scale with the number of links/nodes?
Short/long TTI vs. control overhead: the TTI duration should be adjusted according to user-specific radio channel conditions and QoS requirements to compensate for the control overhead. As shown in [10], as the load increases the system must gradually increase the TTI size (and consequently the spectral efficiency) to cope with non-negligible queuing delay, particularly for the tail of the latency distribution. Hence, different TTI sizes are needed to achieve low latency, depending on the offered load and the percentile of interest.
Open vs. closed loop: for closed loop, if more resources are used for channel training and estimation, more accurate CSI is obtained, albeit with fewer resources left for data. For open loop, a simple broadcast to all nodes may be sufficient but requires more DL resources. In this case, building diversity by means of relay nodes or distributed antennas is recommended. This problem needs to be revisited in light of short packet transmissions.
User density vs. dimensions (antennas, bandwidth, blocklength): classical information theory, rooted in infinite coding blocklength, assumes a fixed (usually small) number of users, and studies fundamental limits as the coding blocklength n → ∞. In the large-system analysis of multiuser systems, the limit n → ∞ is taken before the number of users K → ∞. In massive MTC, a massive number of devices with sporadic traffic need to share the spectrum in a given area, which means that K grows with n. In this case, letting n → ∞ while fixing K may be inaccurate and provide little insight. This warrants a rethinking of the assumption of a fixed population of full-buffer users. A step towards this vision is many-user information theory, in which the number of users increases without bound with the blocklength, as proposed in [11].
V. State-of-the-Art and Gist of Recent Work
Ensuring low-latency and ultra-reliable communication for future wireless networks is of capital importance. To date, little work has combined latency and reliability into a single theoretical framework, although the groundwork has been laid by Polyanskiy's development of bounds on block error rates for finite-blocklength codes [6, 12]. However, queuing effects and networking issues were overlooked. Moreover, no wireless communication systems have been proposed for latency constraints on the order of milliseconds, with hundreds to thousands of nodes and with stringent system reliability requirements.
V-A Latency
At the physical layer, low-latency communication has been studied in terms of throughput-delay tradeoffs. Other theoretical investigations include delay-limited link capacity [13] and the use of network effective capacity [14]. While interesting, these works focus on minimizing the average latency instead of the worst-case latency. At the network level, the literature on queue-based resource allocation is rich, in which tools from Lyapunov optimization, based on myopic queue-length-based control, are the state of the art. However, while stability is an important aspect of queuing networks, fine-grained metrics such as the delay distribution and probabilistic bounds (i.e., tails) cannot be addressed. Indeed, a long-standing challenge is to understand the non-asymptotic tradeoffs between delay, throughput and reliability in wireless networks, including both coding delays and queuing delays. Towards this vision, the works of Al-Zubaidy et al. [15] constitute a very good starting point. Other recent approaches to latency reduction include edge caching [16], short TTIs, grant-free non-orthogonal multiple access [10, 17], mobile edge computing, etc.
V-B Reliability
Reliable communication has been a fundamental problem in information theory since Shannon's seminal paper showing that it is possible to communicate with vanishing probability of error at non-zero rates. The following decades saw the advent of many error control coding schemes for point-to-point communication (Turbo, LDPC and Polar codes). In wireless fading channels, diversity schemes were developed to deal with the deep fades stemming from multipath effects. For coding delays, error exponents (reliability functions) characterize the exponential rates at which error probabilities decay as coding blocklengths become large. However, this approach does not capture the sub-exponential terms needed to characterize low-delay performance (i.e., the tails). Recent works on finite-blocklength analysis and channel dispersion [6, 12] help in this regard but address neither multiuser wireless networks nor interference-limited settings. At the network level, reliability has been studied to complement the techniques used at the physical layer, including ARQ/HARQ at the medium access level. In these works, reliability is usually increased at the cost of latency, through the use of longer blocklengths or retransmissions.
Just recently, packet duplication was proposed to achieve high reliability in [18, 7], high availability (in an interference-free scenario) using multi-connectivity was studied in [7063630], and stochastic network calculus was applied in a single-user multiple-input single-output (MISO) setting in [19]. Reliable V2V communication and mobile edge computing with URLLC guarantees were studied in [20, 21]. From an ultra-reliable communication (URC) perspective, a maximum average rate guaranteeing a signal-to-interference ratio coverage was derived in [9]. Finally, a recent (high-level) URLLC survey highlighting the building principles of URLLC can be found in [22].
V-C Summary
Most of the current state of the art has made significant contributions towards understanding the ergodic capacity and the average queuing performance of wireless networks, focusing on large blocklengths. However, these lines of work fall short of providing crisp insights into reliability and latency and their non-asymptotic tradeoffs. Furthermore, current radio access networks are designed with the aim of maximizing throughput while considering a few active users. A principled framework laying down the fundamentals of URLLC at the network level is thus sorely lacking.
VI. Tools and Methodologies for URLLC
As alluded to earlier, URLLC mandates a departure from expected utility-based approaches relying on average quantities. Instead, a holistic framework which takes into account end-to-end delay, reliability, packet size, network architecture/topology, scalability and decision-making under uncertainty is lacking. In addition, a myriad of fundamental system design and algorithmic principles central to URLLC are at stake. Next, following up on the breakdown in Figure 1, we identify a (non-exhaustive) set of tools and methodologies which serve this purpose.
VI-A Risk
VI-A-1 Risk-sensitive learning and control
The notion of “risk” is defined as the chance of a huge loss occurring with very low probability. In this case, instead of maximizing the expected payoff (or utility), the goal is to mitigate the risk of the huge loss. While reinforcement learning aims at maximizing the expected utility of an agent (i.e., a transmitting node), in risk-sensitive learning the utility is modified so as to incorporate the risk (e.g., variance, skewness, and higher-order statistics). This is done by exponentiating the agent’s cost function before taking the expectation, yielding higher-order moments. More concretely, the utility function of an agent is given by:
$u_i(\boldsymbol{\pi}) = \frac{1}{\mu}\log \mathbb{E}_{\boldsymbol{\pi}}\left[e^{\mu U_i(\boldsymbol{\pi})}\right]$ (1)
where $\mu \neq 0$ is the risk-sensitivity index and $\boldsymbol{\pi}$ is the agent’s transmission strategy. By doing a Taylor expansion around $\mu = 0$ we get:
$u_i(\boldsymbol{\pi}) \approx \mathbb{E}\left[U_i(\boldsymbol{\pi})\right] + \frac{\mu}{2}\operatorname{Var}\left[U_i(\boldsymbol{\pi})\right] + O(\mu^2)$ (2)
Moreover,
$\lim_{\mu \to 0} u_i(\boldsymbol{\pi}) = \mathbb{E}\left[U_i(\boldsymbol{\pi})\right]$ (3)
that is, the risk-neutral expected utility is recovered as the risk-sensitivity index vanishes. In risk-sensitive reinforcement learning, every agent needs to first estimate its own utility function over time based on a (possibly delayed or imperfect) feedback before updating its transmission probability distribution $\pi_i(t)$. The utility estimate $\hat{u}_{i,a}(t)$ of agent $i$ when choosing strategy $a$ is typically given by:
$\hat{u}_{i,a}(t) = \hat{u}_{i,a}(t-1) + \gamma(t)\,\mathbb{1}_{\{a_i(t)=a\}}\big(u_i(t) - \hat{u}_{i,a}(t-1)\big)$ (4)
where $\gamma(t)$ is a learning parameter. The application of risk-sensitive learning in the context of millimeter-wave communication is given in Section VII-A.
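As a toy illustration of the risk-sensitive utility in (1)-(2), the following sketch (with assumed payoff distributions and a hypothetical risk-sensitivity index $\mu = -2$) estimates the exponential utility of two strategies that have equal mean payoff but different variance:

```python
import math
import random

def risk_sensitive_value(payoffs, mu):
    """Sample estimate of (1/mu) * log E[exp(mu * U)], cf. Eq. (1)."""
    m = sum(math.exp(mu * u) for u in payoffs) / len(payoffs)
    return math.log(m) / mu

random.seed(0)
# Two assumed strategies with equal mean payoff but different variance
safe = [random.gauss(1.0, 0.1) for _ in range(20000)]
risky = [random.gauss(1.0, 1.0) for _ in range(20000)]

mu = -2.0  # mu < 0: risk-averse agent, penalizing payoff variance per Eq. (2)
v_safe = risk_sensitive_value(safe, mu)
v_risky = risk_sensitive_value(risky, mu)
```

Per (2), the risk-sensitive value is approximately the mean plus $\frac{\mu}{2}$ times the variance, so the risk-averse agent values the low-variance strategy near 0.99 and the high-variance one near 0, preferring the former even though both have the same expected payoff.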
VI-A-2 Mathematical finance and portfolio optimization
Financial engineering and electrical engineering are seemingly different areas that share strong underlying connections. Both rely on the statistical analysis and modeling of systems and the underlying time series. Inspired by the notion of risk in mathematical finance, we examine various risk measures, such as the value-at-risk (VaR), conditional VaR (CVaR), entropic VaR (EVaR), and mean-variance.

Value-at-Risk (VaR): Initially proposed by J.P. Morgan, VaR was developed in response to the financial disasters of the 1990s and played a vital role in market risk management. By definition, VaR is the worst loss over a target horizon with a given level of confidence, such that for $\alpha \in (0,1)$:
$\mathrm{VaR}_{\alpha}(X) = \inf\{x \in \mathbb{R} : \Pr(X \leq x) \geq \alpha\}$ (5)
which can also be expressed as: $\Pr\big(X > \mathrm{VaR}_{\alpha}(X)\big) \leq 1-\alpha$.

Conditional VaR: CVaR measures the expected loss in the right tail given that a particular threshold has been crossed. The CVaR is defined as the conditional mean value of a random variable exceeding a particular percentile. This precisely measures the risky realizations, as opposed to the variance, which simply measures how spread the distribution is. Moreover, CVaR overcomes the caveat of VaR due to the lack of control of the losses incurred beyond the threshold. Formally speaking, it holds that:
$\mathrm{CVaR}_{\alpha}(X) = \mathbb{E}\big[X \mid X \geq \mathrm{VaR}_{\alpha}(X)\big]$ (6)
Entropic VaR (EVaR): EVaR is the tightest upper bound one can find using the Chernoff inequality for the VaR and CVaR, where for all $a \in \mathbb{R}$ and $s > 0$:
$\Pr(X \geq a) \leq e^{-sa}\, M_X(s)$ (7)
where $M_X(s) = \mathbb{E}\big[e^{sX}\big]$ is the moment generating function (MGF) of the random variable $X$. By solving the equation $e^{-sa} M_X(s) = \alpha$ with respect to $a$ for $\alpha \in (0,1]$, we get
$a = \frac{1}{s}\ln\frac{M_X(s)}{\alpha}$ (8)
The EVaR
$\mathrm{EVaR}_{1-\alpha}(X) = \inf_{s > 0}\left\{\frac{1}{s}\ln\frac{M_X(s)}{\alpha}\right\}$ (9)
is an upper bound for the CVaR, and its dual representation is related to the Kullback-Leibler divergence [23]. Moreover, we have that:
$\mathrm{VaR}_{1-\alpha}(X) \leq \mathrm{CVaR}_{1-\alpha}(X) \leq \mathrm{EVaR}_{1-\alpha}(X)$ (10)
Remark: Interestingly, we note that VaR and CVaR are related to the Pickands-Balkema-de Haan theorem of EVT, i.e., Theorem 2. To illustrate that, we denote $d = \mathrm{VaR}_{\alpha}(X)$. Since $\mathrm{CVaR}_{\alpha}(X)$ calculates the mean of $X$ conditioned on $X > d$, we have that:
$\mathrm{CVaR}_{\alpha}(X) = d + \mathbb{E}[Y \mid X > d]$ (11)
where $Y = X - d$ is the excess value. Letting $d$ approach the right endpoint of the distribution of $X$, as per Theorem 2, $Y$ can be approximated by a generalized Pareto distributed random variable whose mean is equal to $\tilde{\sigma}/(1-\xi)$ for shape parameter $\xi < 1$.
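For intuition, the empirical counterparts of (5) and (6) can be computed directly from samples; the sketch below (losses drawn from an assumed unit-rate exponential toy distribution) estimates VaR and CVaR at confidence level $\alpha = 0.95$:

```python
import math
import random

def var_cvar(samples, alpha):
    """Empirical VaR (5) and CVaR (6) of a loss distribution at level alpha."""
    xs = sorted(samples)
    idx = int(math.ceil(alpha * len(xs))) - 1
    var = xs[idx]                        # empirical alpha-quantile
    tail = [x for x in xs if x >= var]   # losses at or beyond the VaR threshold
    cvar = sum(tail) / len(tail)         # conditional mean of the right tail
    return var, cvar

random.seed(1)
losses = [random.expovariate(1.0) for _ in range(100000)]  # assumed Exp(1) losses
v, c = var_cvar(losses, 0.95)
```

For Exp(1) losses the exact values are $\mathrm{VaR}_{0.95} = \ln 20 \approx 3.00$ and, by memorylessness, $\mathrm{CVaR}_{0.95} = \mathrm{VaR}_{0.95} + 1$; computing EVaR would additionally require minimizing the Chernoff bound (9) over $s$.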

Markowitz’s mean-variance (MV): MV is one of the most popular risk models in modern finance (also referred to as Markowitz’s risk-return model), in which the value of an investment is modeled as a tradeoff between expected payoff (mean return) and variability of the payoff (risk). In a learning context, this entails learning the mean and variance of the payoff from the feedback as follows. First, the mean-payoff estimate $\hat{m}_i(t)$ of agent $i$ at time $t$ is given by:
$\hat{m}_i(t) = \hat{m}_i(t-1) + \gamma(t)\big(u_i(t) - \hat{m}_i(t-1)\big)$ (12)
after which the variance is estimated as:
$\hat{v}_i(t) = \hat{v}_i(t-1) + \gamma(t)\Big(\big(u_i(t) - \hat{m}_i(t)\big)^2 - \hat{v}_i(t-1)\Big)$ (13)
where $\gamma(t)$ is a learning parameter. Once the variance is estimated, principles of reinforcement learning can be readily applied [24].
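A minimal sketch of the running estimates (12)-(13), tracking the mean and variance of a noisy payoff stream with an assumed constant learning parameter:

```python
import random

class MeanVarianceTracker:
    """Running mean/variance estimates of a payoff stream, cf. (12)-(13),
    with an assumed constant learning parameter gamma."""
    def __init__(self, gamma=0.01):
        self.gamma = gamma
        self.mean = 0.0
        self.var = 0.0
    def update(self, payoff):
        self.mean += self.gamma * (payoff - self.mean)                   # (12)
        self.var += self.gamma * ((payoff - self.mean) ** 2 - self.var)  # (13)
        return self.mean, self.var

random.seed(2)
trk = MeanVarianceTracker(gamma=0.01)
for _ in range(50000):
    trk.update(random.gauss(3.0, 2.0))  # noisy payoffs with mean 3, variance 4
```

The estimates converge near the true mean and variance; a risk-aware learner can then trade the two off in the spirit of Markowitz’s model.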
VI-B Tail
VI-B-1 Extreme value theory (EVT)
Latency and reliability are fundamentally about “taming the tails” and going beyond “central-limit theorems”. In this regard, EVT provides a powerful and robust framework to fully characterize the probability distributions of extreme events and extreme tails of distributions. EVT was developed during the twentieth century and is now a well-established tool to study extreme deviations from the average of a measured phenomenon. EVT has found many applications in oceanography, hydrology, pollution studies, meteorology, material strength, highway traffic and many others (for a comprehensive survey on EVT, see [25]). EVT is built around the following two theorems:
Theorem 1 (Fisher-Tippett-Gnedenko theorem for block maxima [25]).
Given $n$ independent samples $X_1, \dots, X_n$ from the random variable $X$, define $M_n = \max\{X_1, \dots, X_n\}$. As $n \to \infty$, we can approximate the cumulative distribution function (CDF) of $M_n$ as
$\Pr(M_n \leq x) \approx G(x) = \exp\left(-\Big[1 + \xi\,\frac{x-\mu}{\sigma}\Big]^{-1/\xi}\right)$ (14)
where $G(x)$, defined on $\{x : 1 + \xi(x-\mu)/\sigma > 0\}$, is the generalized extreme value (GEV) distribution characterized by the location parameter $\mu \in \mathbb{R}$, the scale parameter $\sigma > 0$, and the shape parameter $\xi \in \mathbb{R}$.
Theorem 2 (Pickands-Balkema-de Haan theorem for exceedances over thresholds [25]).
Consider the distribution of $X$ conditionally on exceeding some high threshold $d$. As the threshold $d$ closely approaches the right endpoint of the distribution of $X$, the conditional CDF of the excess value $Y = X - d > 0$ is
$\Pr(Y \leq y \mid X > d) \approx H(y) = 1 - \Big(1 + \frac{\xi y}{\tilde{\sigma}}\Big)^{-1/\xi}$ (15)
where $H(y)$, defined on $\{y : 1 + \xi y/\tilde{\sigma} > 0\}$, is the generalized Pareto distribution (GPD). Moreover, the characteristics of the GPD depend on the scale parameter $\tilde{\sigma} > 0$ and the shape parameter $\xi$. The location and scale parameters in (14) and (15) are related as per $\tilde{\sigma} = \sigma + \xi(d - \mu)$, while the shape parameters in both theorems are identical.
While Theorem 1 focuses on the maximal value of a sequence of variables, Theorem 2 aims at the values of a sequence above a given threshold. Both theorems asymptotically characterize the statistics of extreme events and provide a clean-slate approach for the analysis of ultra-reliable communication, i.e., failures with extremely low probabilities. A direct application of EVT to mobile edge computing scenarios is found in Section VII-C.
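To illustrate Theorem 2 numerically, the sketch below draws samples from an assumed unit-rate exponential distribution, whose threshold excesses are exactly GPD with $\xi = 0$ and $\tilde{\sigma} = 1$, and fits the GPD parameters to the exceedances by the method of moments:

```python
import random
import statistics

def fit_gpd_moments(excesses):
    """Method-of-moments estimates of the GPD scale and shape in (15):
    xi = (1 - mean^2/var)/2,  sigma = mean*(mean^2/var + 1)/2."""
    m = statistics.mean(excesses)
    v = statistics.variance(excesses)
    xi = 0.5 * (1.0 - m * m / v)
    sigma = 0.5 * m * (m * m / v + 1.0)
    return sigma, xi

random.seed(3)
samples = [random.expovariate(1.0) for _ in range(200000)]
d = 3.0                                  # high threshold, e.g. on a queue length
excesses = [x - d for x in samples if x > d]
sigma, xi = fit_gpd_moments(excesses)
```

The fitted shape parameter is close to 0 and the scale close to 1, matching the memoryless exceedances of the exponential; in practice maximum-likelihood fitting is often preferred, but the moment estimator keeps the example self-contained.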
VI-B-2 Effective bandwidth
Effective bandwidth is a large-deviation-type approximation defined as the minimal constant service rate needed to serve a random arrival under a queuing delay requirement [14]. Let $a_t$ and $q_t$ be the number of arrivals at time $t$ and the number of users in the queue at time $t$, respectively. Assume the queue size is infinite and the server can serve $c_t$ users per unit of time, where $c_t$ is referred to as the server capacity at time $t$. The queue is governed by the following equation:
$q_{t+1} = \max(q_t + a_{t+1} - c_{t+1},\, 0)$ (16)
Moreover, let:
$\Lambda_A(\theta) = \lim_{t \to \infty} \frac{1}{t}\log\mathbb{E}\Big[e^{\theta\sum_{s=1}^{t} a_s}\Big], \qquad \Lambda_C(\theta) = \lim_{t \to \infty} \frac{1}{t}\log\mathbb{E}\Big[e^{\theta\sum_{s=1}^{t} c_s}\Big]$ (17)
for all $\theta \in \mathbb{R}$, and assume $\Lambda_A(\theta)$ and $\Lambda_C(\theta)$ are differentiable. Let $\Lambda_A^*$ be the Legendre transform of $\Lambda_A$, i.e.,
$\Lambda_A^*(x) = \sup_{\theta}\big(\theta x - \Lambda_A(\theta)\big)$ (18)
Likewise, let $\Lambda_C^*$ be the Legendre transform of $\Lambda_C$. We seek the decay rate of the tail distribution of the stationary queue length. This is given in [14], which states that if there exists a unique $\theta^* > 0$ such that
$\Lambda_A(\theta^*) + \Lambda_C(-\theta^*) = 0$ (19)
then it holds that:
$\lim_{q \to \infty} \frac{1}{q}\log\Pr(q_{\infty} > q) = -\theta^*$ (20)
In particular, for a fixed capacity $c_t = c$ for all $t$, we have that:
$\frac{\Lambda_A(\theta^*)}{\theta^*} = c$ (21)
The quantity $\alpha(\theta^*) = \Lambda_A(\theta^*)/\theta^*$ is called the effective bandwidth of the arrival process, subject to the condition that the tail distribution of the queue length has decay rate $\theta^*$.
Remark: Since the distribution of the queuing delay is obtained based on large deviation principles, the effective bandwidth can be used for constant arrival rates when the delay bound is large and the delay violation probability is small. This raises the question of the usefulness and correctness of the effective bandwidth when solving problems dealing with finite packet/queue lengths and very low latencies.
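As a sanity check of (16)-(21), the sketch below assumes Poisson arrivals, for which $\Lambda_A(\theta) = \lambda(e^{\theta} - 1)$, and unit service capacity; it solves (21) for the decay rate $\theta^*$ by bisection and verifies the associated Lundberg-type bound $\Pr(q > b) \leq e^{-\theta^* b}$ by simulating the Lindley recursion:

```python
import math
import random

def poisson(mu):
    """Knuth's Poisson sampler (stdlib only)."""
    L, k, p = math.exp(-mu), 0, 1.0
    while True:
        p *= random.random()
        if p <= L:
            return k
        k += 1

def theta_star(lam, c):
    """Bisection for the decay rate in (19)/(21): solve lam*(e^t - 1)/t = c."""
    lo, hi = 1e-6, 5.0
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if lam * (math.exp(mid) - 1.0) / mid < c:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

lam, c = 0.7, 1.0        # assumed arrival rate and fixed service capacity
th = theta_star(lam, c)

# Lindley recursion (16); the tail obeys Pr(q > b) <= exp(-theta* b)
random.seed(4)
q, exceed, T, b = 0, 0, 1_000_000, 8
for _ in range(T):
    q = max(q + poisson(lam) - 1, 0)
    if q > b:
        exceed += 1
tail = exceed / T
```

The empirical tail probability stays below the exponential bound, while the Cramér prefactor (here below one) quantifies exactly the sub-exponential term the remark above warns about.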
VI-B-3 Stochastic network calculus (SNC)
SNC considers queuing systems and networks of systems with stochastic arrival, departure, and service processes, where the bivariate functions $A(\tau,t)$, $D(\tau,t)$ and $S(\tau,t)$, for any $0 \leq \tau \leq t$, denote the cumulative arrivals, departures and service of the system, respectively, in the interval $[\tau, t)$. The analysis of queuing systems is done through simple linear input-output relations. In the bit domain, it is based on a (min,+) dioid algebra, where the standard addition is replaced by the minimum (or infimum) and the standard multiplication is replaced by addition. Analogous to the convolution and deconvolution in standard algebra, convolution and deconvolution operators are defined in the (min,+) algebra and are often used for performance evaluation. Finally, observing that the bit and SNR domains are linked by the exponential function, arrival and departure processes are transferred from the bit to the SNR domain. Then, backlog and delay bounds are derived in the transfer domain using the corresponding (min,×) algebra before going back to the bit domain to obtain the desired performance bounds.
Define the cumulative arrival, service and departure processes, respectively, as:
$A(\tau,t) = \sum_{i=\tau}^{t-1} a_i, \qquad S(\tau,t) = \sum_{i=\tau}^{t-1} s_i, \qquad D(\tau,t) = \sum_{i=\tau}^{t-1} d_i$ (22)
where $a_i$, $s_i$ and $d_i$ denote the arrivals, service and departures in time slot $i$. The backlog at time $t$ is given by $B(t) = A(0,t) - D(0,t)$. Moreover, the delay at time $t$, i.e., the number of slots it takes for an information bit arriving at time $t$ to be received at the destination, is $W(t) = \inf\{w \geq 0 : A(0,t) \leq D(0,t+w)\}$, and the delay violation probability is given by
$p_v(w) = \Pr\big(W(t) > w\big)$ (23)
SNC allows one to obtain bounds on the delay violation probability based on simple statistical characterizations of the arrival and service processes in terms of their Mellin transforms. First, by converting the cumulative processes in the bit domain through the exponential function, the corresponding processes in the SNR domain are: $\mathcal{A}(\tau,t) = e^{A(\tau,t)}$, $\mathcal{S}(\tau,t) = e^{S(\tau,t)}$ and $\mathcal{D}(\tau,t) = e^{D(\tau,t)}$. From these definitions, an upper bound on the delay violation probability can be computed by means of the Mellin transforms of $\mathcal{A}$ and $\mathcal{S}$:
$p_v(w) \leq \inf_{s > 0} \mathcal{K}(s, -w)$ (24)
where $\mathcal{K}(s, -w)$ is the so-called steady-state kernel, defined as
$\mathcal{K}(s, -w) = \lim_{t \to \infty} \sum_{u=0}^{t} \mathcal{M}_{\mathcal{A}}(1+s, u, t)\, \mathcal{M}_{\mathcal{S}}(1-s, u, t+w)$ (25)
and $\mathcal{M}_X(s) = \mathbb{E}\big[X^{s-1}\big]$ denotes the Mellin^4 transform of a nonnegative random variable $X$, for any $s \in \mathbb{R}$. (^4 By setting $s = 1 + \theta$, the Mellin transform of an SNR-domain process reduces to the MGF of the corresponding bit-domain process, recovering the effective bandwidth and MGF-based network calculus.)
Analogous to [15], we consider bounded arrivals where the log-MGF of the cumulative arrivals in the bit domain is bounded by
$\frac{1}{\theta}\log\mathbb{E}\big[e^{\theta A(\tau,t)}\big] \leq \rho(\theta)\,(t-\tau) + \sigma(\theta)$ (26)
This characterization can be viewed as a probabilistic extension of a traffic flow that is deterministically regulated by a token bucket with rate $\rho$ and burst size $\sigma$. To simplify the notation, we restrict the following analysis to $\rho$ and $\sigma$ values that are independent of $\theta$, which is true for constant arrivals. Subsequently, the Mellin transform of the SNR-domain arrival process can be upper-bounded by:
$\mathcal{M}_{\mathcal{A}}(1+s, \tau, t) \leq e^{s\rho(t-\tau)}\, e^{s\sigma}$ (27)
Assuming the cumulative arrival process in the SNR domain to have stationary and independent increments, the steady-state kernel for a fading wireless channel is given by:
$\mathcal{K}(s, -w) = \frac{\mathcal{M}_{g(\gamma)}(1-s)^{w}}{1 - e^{s\rho}\,\mathcal{M}_{g(\gamma)}(1-s)}$
for any $s > 0$, under the stability condition $e^{s\rho}\,\mathcal{M}_{g(\gamma)}(1-s) < 1$, where $g(\gamma)$ denotes the SNR-domain service of the fading channel with instantaneous SNR $\gamma$. The delay bound (24) thus reduces to
$p_v(w) \leq \inf_{s > 0}\left\{ e^{s\sigma}\,\frac{\mathcal{M}_{g(\gamma)}(1-s)^{w}}{1 - e^{s\rho}\,\mathcal{M}_{g(\gamma)}(1-s)} \right\}$ (28)
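A numerical sketch of the bound (28) for a Rayleigh fading channel, taking $g(\gamma) = 1 + \gamma$ (per-slot capacity $\log(1+\gamma)$ nats), with assumed arrival rate $\rho = 0.5$ nats/slot, $\sigma = 0$ and an average SNR of 10; the Mellin transform of the service is estimated by Monte Carlo and the kernel is minimized over a grid of $s$ values:

```python
import math
import random

def mellin_service(s, gamma_bar, n=50000, seed=5):
    """Monte Carlo estimate of M_g(1-s) = E[(1+gamma)^(-s)] for Rayleigh fading,
    i.e. exponentially distributed SNR gamma with (assumed) mean gamma_bar."""
    rng = random.Random(seed)
    return sum((1.0 + rng.expovariate(1.0 / gamma_bar)) ** (-s) for _ in range(n)) / n

rho, gamma_bar = 0.5, 10.0                 # assumed arrival rate (nats/slot), mean SNR
s_grid = [0.2 * k for k in range(1, 16)]
mg = {s: mellin_service(s, gamma_bar) for s in s_grid}

def delay_bound(w):
    """Kernel bound on Pr(W > w), cf. (28) with sigma = 0, minimized over s."""
    best = 1.0
    for s in s_grid:
        stab = math.exp(s * rho) * mg[s]   # stability requires stab < 1
        if stab < 1.0:
            best = min(best, mg[s] ** w / (1.0 - stab))
    return best

bounds = [delay_bound(w) for w in (1, 2, 4, 8)]
```

The bound decays roughly geometrically in the delay budget $w$, which is precisely the tail behavior that average-based analysis cannot expose.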
VI-B-4 Meta distribution
The meta distribution is a fine-grained key performance metric of wireless systems [26], which provides a mathematical foundation for questions of network densification under strict reliability constraints. As such, the meta distribution is a much sharper and more refined metric than the standard success probability, which is easily obtained as the average over the meta distribution. By definition, the meta distribution is the complementary cumulative distribution function (CCDF) of the random variable
$P_s(\theta) = \Pr\big(\mathrm{SIR} > \theta \mid \Phi\big)$ (29)
which is the CCDF of the conditional signal-to-interference ratio (SIR) of the typical user, given the point processes $\Phi$ and conditioned on the desired transmitter being active. The meta distribution is formally given by [27]:
$\bar{F}_{P_s}(\theta, x) = \Pr^{!}\big(P_s(\theta) > x\big), \qquad x \in [0,1]$ (30)
where $\Pr^{!}$ is the Palm measure. Interestingly, the moments $M_b = \mathbb{E}\big[P_s(\theta)^b\big]$ reveal interesting properties of the meta distribution, in which
$M_1 = \mathbb{E}\big[P_s(\theta)\big] \qquad \text{and} \qquad \operatorname{Var}\big[P_s(\theta)\big] = M_2 - M_1^2$ (31)
are the standard success probability and its variance, respectively.
Since all point processes in the model are ergodic, the meta distribution can be interpreted as the fraction of the active links whose conditional success probabilities are greater than $x$. A simple approach to calculate the meta distribution is to approximate it with the beta distribution, which requires only the first and second moments $M_1$ and $M_2$. Recent applications of the meta distribution can be found in [9] for industrial automation, and in [28] in the context of V2V communication.
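The sketch below estimates the meta distribution by Monte Carlo for a toy Poisson field of interferers with Rayleigh fading (all parameters assumed): each realization of the interferer distances yields one conditional success probability $P_s(\theta) = \prod_i 1/(1 + \theta(r/d_i)^{\alpha})$, and the moments $M_1$, $M_2$ give the matching beta approximation:

```python
import math
import random

random.seed(6)

def poisson(mu):
    """Knuth's Poisson sampler (stdlib only)."""
    L, k, p = math.exp(-mu), 0, 1.0
    while True:
        p *= random.random()
        if p <= L:
            return k
        k += 1

def conditional_success(theta, alpha, lam, R, r=1.0):
    """One realization of P_s(theta) in (29): a Rayleigh-fading link of length r
    amid a Poisson field of interferers of density lam in a disk of radius R."""
    p = 1.0
    for _ in range(poisson(lam * math.pi * R * R)):
        u = max(random.random(), 1e-12)     # guard against a zero distance
        d = R * math.sqrt(u)                # uniformly distributed point in the disk
        p *= 1.0 / (1.0 + theta * (r / d) ** alpha)
    return p

ps = [conditional_success(theta=1.0, alpha=4.0, lam=0.05, R=20.0) for _ in range(5000)]
M1 = sum(ps) / len(ps)                      # standard success probability, cf. (31)
M2 = sum(p * p for p in ps) / len(ps)
var = M2 - M1 * M1

def meta(x):
    """Empirical meta distribution (30): fraction of links with P_s(theta) > x."""
    return sum(1 for p in ps if p > x) / len(ps)

# Beta approximation matching the first two moments
a_beta = M1 * (M1 - M2) / (M2 - M1 * M1)
b_beta = a_beta * (1.0 - M1) / M1

# Sanity identity: the CCDF of P_s integrates to M1
integral = sum(meta(k / 200) for k in range(200)) / 200
```

The closing identity, that the meta distribution averages back to the standard success probability, is exactly the sense in which the meta distribution is the sharper of the two metrics.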
VI-C Scale
VI-C-1 Statistical physics
Current wireless systems can support tens to hundreds of nodes with latency constraints on the order of seconds and with moderate reliability. Nonetheless, these do not scale well to systems with thousands to millions of nodes, as envisaged in massive MTC or ultra-dense networks. Solving resource allocation problems in large-scale network deployments is intractable and requires cumbersome Monte Carlo simulations lacking fundamental insight. Here, cutting-edge methods from statistical physics, such as the replica and cavity methods, provide fundamental insights and guidelines for the design of massive and ultra-reliable networks.
Interactions between particles (e.g., atoms, gases or neurons) are a common theme in the statistical physics literature [29, 30, 31]. Here, instead of analyzing the microscopic network state, one is concerned with the macroscopic state, requiring only a few parameters. By invoking this analogy and modeling network elements as particles, a network cost function which captures the interactions between network elements is referred to as the Hamiltonian $H(\boldsymbol{x})$ over the configurations $\boldsymbol{x}$ of the state space $\mathcal{X}$.
One of the underlying principles of statistical physics is that finding the lowest energy of a system is carried out with the aid of the partition sum $Z = \sum_{\boldsymbol{x} \in \mathcal{X}} e^{-\beta H(\boldsymbol{x})}$, summing the Boltzmann factors over all the states at a given fictitious temperature $1/\beta$. By adopting the concept of quenched disorder^5, the network can be replicated to solve the interactions of an infinitely large number of network elements, in which the ground state $E_0 = \min_{\boldsymbol{x}} H(\boldsymbol{x})$ is found via the replica method. (^5 The quenched disorder of a system exists when the randomness of the system characteristics is time-invariant.) Here, $E_0$ is the optimal network cost with exact knowledge of the randomness (e.g., channel and queue states). The replica method in statistical mechanics refers to the idea of computing moments of vanishing order, in which the word “replica” comes from the presence of $n$ copies of the vector of configurations in the calculation of the partition function. The quenched average free energy $F = -\lim_{\beta \to \infty}\frac{1}{\beta}\,\mathbb{E}[\log Z]$, which is calculated by employing the replica trick on the partition sum ($n$ times replicating $Z$), is given by,
$\mathbb{E}[\log Z] = \lim_{n \to 0} \frac{\log \mathbb{E}[Z^n]}{n}$ (32)
where $\mathbb{E}[Z^n]$ is the quenched average of the $n$-th replica of the partition sum. Deriving a closed-form expression for the replicated free energy is the key to analyzing the performance of dense network deployments.
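The identity behind the replica trick in (32) can be unpacked as follows (a standard manipulation, stated here for completeness): since $Z^n = e^{n \log Z} = 1 + n\log Z + O(n^2)$ for small $n$,

```latex
\mathbb{E}[Z^n] = 1 + n\,\mathbb{E}[\log Z] + O(n^2)
\quad\Longrightarrow\quad
\mathbb{E}[\log Z]
  = \lim_{n \to 0} \frac{\mathbb{E}[Z^n] - 1}{n}
  = \lim_{n \to 0} \frac{\log \mathbb{E}[Z^n]}{n},
\qquad
Z = \sum_{\boldsymbol{x} \in \mathcal{X}} e^{-\beta H(\boldsymbol{x})}.
```

The intractable quenched average $\mathbb{E}[\log Z]$ is thus traded for the moments $\mathbb{E}[Z^n]$, computed for integer $n$ (the replicated system) and then analytically continued to $n \to 0$.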
VI-C-2 Mean field game theory
When a large number of wireless nodes compete over limited resources (as in massive MTC or UDN), the framework of mean-field games (MFGs) is instrumental in studying multi-agent resource allocation problems without resorting to time-consuming Monte Carlo simulations [32, 33]. In a game with a very large number of agents, the impact of every agent on the other agents’ utility functions is infinitesimal, and the equilibrium/game is dominated by a nontrivial proportion of the population (called the mean field). The game can therefore be analyzed at a macro level using mean field theory, and fundamental limits of the network can be unraveled. Applications of MFGs are of utmost importance, especially when scalability and complexity are crucial, such as when optimizing autonomous vehicles’ trajectories or UAV platooning, distributed machine learning and many others.
Consider the state distribution of a set of $N$ players,
$m_t^{(N)}(x) = \frac{1}{N}\sum_{i=1}^{N}\delta\big(x - x_i(t)\big)$
which represents the fraction of players at each state $x$ of the state space $\mathcal{X}$, with $\delta(\cdot)$ being the Dirac delta function. In the limit of $N \to \infty$, the MF approach seeks an equilibrium with,
$m_t(x) = \lim_{N \to \infty} m_t^{(N)}(x)$ (33)
which is known as the MF distribution. Assuming a homogeneous control policy over all players, i.e., the same policy for all players $i$, their interactions can be represented as a game consisting of a set of generic players with state $x_t$ and action $a_t$ at time $t$. Here, the goal of a generic player is to maximize its utility $u(x_t, a_t, m_t)$ over its actions and the time period $[0,T]$, under state dynamics consisting of both time-dependent and random components. Remarkably, the presence of a very large number of players leads to a continuum, allowing one to obtain a solution of the above problem, a mean-field equilibrium, using only two coupled partial differential equations, named the Hamilton-Jacobi-Bellman (HJB) and Fokker-Planck-Kolmogorov (FPK) equations,
$-\partial_t v_t(x) = \sup_{a}\Big\{ u(x, a, m_t) + f(x, a, m_t)\,\partial_x v_t(x) \Big\} + \frac{\varsigma^2}{2}\,\partial_{xx}^2 v_t(x)$
$\partial_t m_t(x) + \partial_x\big( f(x, a^*, m_t)\, m_t(x) \big) - \frac{\varsigma^2}{2}\,\partial_{xx}^2 m_t(x) = 0$ (34)
respectively, where $v_t(x)$ is the value function, $f(x, a, m_t)$ is the state drift and $\varsigma$ is the intensity of the random (diffusion) component. Furthermore, the optimal strategy is given by:
$a_t^* = \arg\max_{a}\Big\{ u(x, a, m_t) + f(x, a, m_t)\,\partial_x v_t(x) \Big\}$ (35)
which yields the optimal control of a generic player.
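In simple static settings the mean-field equilibrium can be found without PDEs, as the fixed point of a best-response map against the population average. The sketch below uses a hypothetical power-control utility $u(a, m) = \log(1 + a/(1+m)) - c\,a$, where $m$ is the mean-field interference; the closed-form best response is $a^*(m) = \max(1/c - (1+m), 0)$, and damped iteration finds $m^* = a^*(m^*)$:

```python
import math

def best_response(m, c=0.2):
    """Best power level of a generic player against mean-field interference m,
    for the assumed utility u(a, m) = log(1 + a/(1+m)) - c*a:
    du/da = 1/(1+m+a) - c = 0  =>  a* = max(1/c - (1+m), 0)."""
    return max(1.0 / c - (1.0 + m), 0.0)

# Damped fixed-point iteration for the mean-field equilibrium m* = a*(m*)
m = 0.0
for _ in range(100):
    m = 0.5 * m + 0.5 * best_response(m)
```

With $c = 0.2$, the equilibrium is $m^* = (1/c - 1)/2 = 2$: each generic player's best response is consistent with the interference it expects from the population, which is the finite-dimensional analogue of the HJB-FPK coupling in (34).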
VII Applications
To illustrate the usefulness and effectiveness of the URLLC methodologies, we focus our attention on several use cases pertaining to different verticals and their specific requirements. While each of these use cases has distinct features, there are certain fundamental needs and principles that are common to all of these applications, all stemming from the stringent requirements of ultra-low latency and high reliability.
VII-A Ultra-reliable millimeter-wave communication
Owing to the vast chunks of spectrum in high-frequency bands, millimeter-wave communication is a key enabler for 5G. However, operating at these frequencies suffers from high propagation loss and link variability due to blockage. In contrast to the classical network design based on average metrics (e.g., expected rate), we propose a risk-sensitive reinforcement-learning-based framework to jointly optimize the beamwidth and transmit power of small cells (SCs) in a distributed manner, while taking into account the sensitivity of mmWave links to blockage. To do that, every SC first estimates its own utility function based on user feedback and then updates its probability distribution over time for every selected strategy (i.e., transmit power and beamwidth) [34]. Specifically, each SC adjusts its beamwidth over a fixed range with a fixed step size (both in radians) and selects its transmit power from a discrete set (in dBm); the SC and UE are equipped with fixed numbers of transmit and receive antennas, respectively. The blockage is modeled as a distance-dependent probability state where the channel is either line-of-sight (LOS) or non-LOS for urban environments in the mmWave band. Numerical results are obtained via Monte Carlo simulations over several random topologies. Furthermore, we compare the proposed risk-sensitive learning (RSL) scheme with two baselines: (i) classical learning (CSL), which refers to the learning framework in which the utility function only considers the mean value of the utility, and (ii) Baseline 1 (BL1), where the SC selects the beamwidth with maximum transmit power.
In Fig. 4, we plot the tail distribution, i.e., the complementary cumulative distribution function (CCDF), of the achievable rate for a given SC density. The CCDF captures the reliability, defined as the probability that the achievable user rate is higher than a predefined target rate $r_0$, i.e., $\Pr(r \geq r_0)$. It is observed that the RSL scheme achieves a higher such probability than the baselines CSL and BL1, the latter obtaining less than 60%. However, at lower rates (less than 2 Gbps) or at very high rates, shown by the cross-point, the RSL obtains a lower probability as compared to the baselines. This shows that the proposed solution provides a user rate which is more concentrated around its median, so as to provide uniform service for all users. This can be seen from the user rate distribution, in which RSL has a small variance of 0.5085, while the CSL and BL1 have higher variances of 2.8678 and 2.8402, respectively.
Fig. 4 also plots the impact of network density on the reliability, defined as the fraction of UEs that achieve a given target rate $r_0$, i.e., $\Pr(r \geq r_0)$. It is shown that for the given target rates, RSL guarantees higher reliability as compared to the baselines. Moreover, the higher the target rate, the bigger the performance gap between the proposed algorithm and the baselines. In addition, a linear increase in network density is shown to decrease reliability, with the fractions of users that achieve the target rate under RSL, CSL, and BL1 all reduced accordingly.
VII-B Virtual reality (VR)
VR is a use case where URLLC plays an important role, due to the fact that the human eye needs to experience accurate and smooth movements with low (millisecond-range) motion-to-photon (MTP) latency to avoid motion sickness. Here, multiple players coexist in a gaming arcade and engage in an interactive VR gaming experience. VR head-mounted displays (HMDs) are connected via mmWave wireless connections to multiple servers operating in the same mmWave frequency band and equipped with edge computing servers and storage units. To minimize the VR service latency, players offload their computing tasks, which consist of rendering high-definition video frames, to the edge servers over mmWave links. First, players send their tracking data, consisting of their poses (location and rotation coordinates) and game play data, in the uplink to an edge server. The edge server renders the corresponding player’s frame and transmits it in the downlink. Since edge servers are typically equipped with high-computation-power graphical processing units (GPUs), computing latency is minimized as compared to local computing in the player’s HMD. In addition to minimizing computing latency, reliable and low-latency communication is needed to minimize the over-the-air communication latency.
In this regard, we propose a proactive computing and multi-connectivity (MC) solution, which is motivated by recent findings on predicting users’ poses with high accuracy over an upcoming prediction window of hundreds of milliseconds [35]. Here, we investigate the effect of this knowledge in significantly reducing the computing latency via proactive computing, whereby servers proactively render the upcoming HD video frames, which are stored at the edge server prior to the users’ requests.
Ensuring a reliable link in an mmWave-enabled VR environment is a daunting task, since the mmWave signal experiences a high level of variability and blockage. Therefore, we investigate MC as an enabler for reliable communication, in which a gaming arcade with a grid of game pods, served by multiple mmWave access points connected to edge servers, is assumed. We model the user association to edge servers as a dynamic matching problem to minimize service latency, such that users with a link quality below a predefined threshold are served via MC.
In Fig. 6, we plot the VR service reliability of the proactive solution with MC and of a baseline scheme without proactive computing or MC. In this context, service reliability is defined as the probability of experiencing a transmission delay less than a threshold value, set here to a fixed delay budget in ms. Fig. 6 shows that, for a given server density, reliability decreases (as the rate of violating the maximum delay threshold increases) with the number of players. However, increasing the number of servers improves the network reliability, as the likelihood of finding a server with good signal quality increases. Moreover, it is shown that MC is instrumental in boosting the service reliability by overcoming mmWave signal fluctuations and minimizing the worst service delay a user can get. Moreover, Fig. 6 plots the user reliability as a function of server density. User reliability is expressed as the percentage of users who achieve the reliability target. It can be seen that reliability by means of both proactivity and MC ensures all users are within the delay budget, even with a low number of servers.
VII-C Mobile Edge Computing
We consider a mobile edge computing (MEC) scenario in which MEC servers are deployed at the network edge to provide faster computation capabilities for mobile devices’ tasks. Although mobile devices can wirelessly offload their computation-intensive tasks to proximal MEC servers, offloading tasks incurs extra latency. Specifically, if the number of task-offloading users is large, some offloaded tasks need to wait for the available computational resources of the servers. In this case, the waiting time for task computing at the server cannot be ignored and should be taken into account. Since the waiting time is closely coupled with the task queue length, and extreme queue values will severely deteriorate the delay performance, we leverage EVT to investigate the impact of the queue length on the performance of MEC [36]. Firstly, we set a threshold $d$ on the queue length and impose a probabilistic constraint on the queue length threshold violation, i.e.,
$\Pr\big(Q(t) > d\big) \leq \epsilon$ (36)
where $Q(t)$ and $\epsilon$ are the queue length in time slot $t$ and the tolerable violation probability, respectively. Subsequently, we focus on the excess queue length over the threshold $d$. According to Theorem 2, we know that the statistics of the exceedances over the threshold are characterized by the scale parameter $\tilde{\sigma}$ and shape parameter $\xi$. Thus, we formulate two constraints for the scale and shape parameters, i.e., $\tilde{\sigma} \leq \tilde{\sigma}^{\max}$ and $\xi \leq \xi^{\max}$, which can be further cast as constraints for the mean and second moment of the excess queue value, i.e.,
$\mathbb{E}\big[Q(t) - d \mid Q(t) > d\big] \leq \frac{\tilde{\sigma}^{\max}}{1 - \xi^{\max}}$ (37)
$\mathbb{E}\big[(Q(t) - d)^2 \mid Q(t) > d\big] \leq \frac{2\,(\tilde{\sigma}^{\max})^2}{(1 - \xi^{\max})(1 - 2\xi^{\max})}$ (38)
Utilizing Lyapunov stochastic optimization, a control algorithm is proposed for task offloading and computation resource allocation while satisfying the constraint on the queue length threshold violation (36) and the statistics of the extreme queue length, i.e., (37) and (38) [37].
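In Lyapunov stochastic optimization, time-average constraints such as (37)-(38) are typically enforced through virtual queues that must remain stable. The sketch below (a toy task queue with Poisson arrivals, and assumed threshold and moment bounds) illustrates the virtual-queue updates; when the constraints are satisfiable, the virtual queues grow sublinearly:

```python
import math
import random

def poisson(mu):
    """Knuth's Poisson sampler (stdlib only)."""
    L, k, p = math.exp(-mu), 0, 1.0
    while True:
        p *= random.random()
        if p <= L:
            return k
        k += 1

random.seed(7)
d = 5.0                           # queue length threshold, cf. (36) (assumed value)
mean_bound, sq_bound = 3.0, 12.0  # right-hand sides of (37) and (38) (assumed values)

q = vq_mean = vq_sq = 0.0
T = 200_000
for _ in range(T):
    q = max(q + poisson(0.5) - 1.0, 0.0)   # toy task queue, unit service rate
    excess = max(q - d, 0.0)
    ind = 1.0 if q > d else 0.0
    # Virtual queues enforcing the conditional-moment constraints (37)-(38):
    vq_mean = max(vq_mean + excess - mean_bound * ind, 0.0)
    vq_sq = max(vq_sq + excess ** 2 - sq_bound * ind, 0.0)
```

Because the toy queue's conditional excess moments sit well below the assumed bounds, both virtual queues stay near zero; in the full algorithm of [37], the offloading and resource-allocation decisions are chosen each slot to keep these virtual queues stable while minimizing delay.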
Fig. 8 shows the tail distribution, i.e., the complementary cumulative distribution function (CCDF), of the queue length, in which a threshold on the queue length is set. Given a zero-approaching threshold violation probability and applying Theorem 2 to the conditional excess queue value, we also plot in Fig. 8 the tail distributions of the conditional excess queue value and of the approximating GPD, which coincide with each other. Moreover, the shape parameter of the approximating GPD can aid us in estimating the statistics of the maximal queue length as per Theorem 1, so as to proactively tackle the occurrence of extreme events.
In order to show the impact of computation speed and task arrival rates, we vary the bound on the queuing delay and plot in Fig. 8 the delay bound violation probability as a reliability measure. Since the MEC servers provide faster computation capabilities, offloading tasks to the MEC servers further reduces the waiting time for task execution. In other words, the MEC-centric approach improves the reliability performance for delay-sensitive applications with higher computation requirements.
VII-D Multi-connectivity for ultra-dense networks
We investigate the fundamental problem of BS-UE association aiming at improving capacity in the context of ultra-dense networks via multi-connectivity. We introduce a network-wide cost function that takes into account the tradeoff between the received signal-to-noise ratio (SNR) and the network power consumption, including both the transmit power and the power consumption of BS and UE multi-connectivity, and analyze the network as a function of the channel statistics as the number of BSs $B$ and UEs $U$ grows large with a given ratio $\zeta = B/U$. Here, the optimization variable $\boldsymbol{x}$ represents the BS-UE associations. Henceforth, the objective is to determine the optimal network-wide cost averaged over the channel statistics as follows:
$\bar{C} = \mathbb{E}_{\boldsymbol{h}}\Big[\min_{\boldsymbol{x}}\; C(\boldsymbol{x}, \boldsymbol{h})\Big]$ (39)
where the cost $C(\boldsymbol{x}, \boldsymbol{h})$ trades off the received SNRs against the transmit powers and the multi-connectivity power terms $M_b\, c_{\mathrm{BS}}$ and $N_u\, c_{\mathrm{UE}}$, in which $M_b$, $N_u$, $c_{\mathrm{BS}}$ and $c_{\mathrm{UE}}$ are the number of UEs connected with BS $b$, the number of BSs connected with UE $u$, and the power consumption for multi-connectivity at the BSs and UEs, respectively.
The complexity of solving (39) grows exponentially with the numbers of BSs and UEs. Therefore, due to the reduced complexity of the solution steps, we resort to the analysis of the channel-averaged cost instead of the instantaneous one. Although the former is simpler to solve, it is oblivious to the instantaneous channel states of the network, while the latter takes the instantaneous channel states into account. Henceforth, finding an analytical expression for the optimal cost in the dense regime and obtaining important statistics of the operation of the network at this optimal point (such as the average SNR, and the average number of connections per UE and per BS) is the prime motivation. This is achieved by using tools from statistical physics: the Hamiltonian, the partition sum, and the replica method introduced in Section VI-C-1. Equipped with the above tools, solving (39) is tantamount to solving a set of fixed-point equations.
Fig. (a) validates the effectiveness of the analytical expression using extensive sets of Monte Carlo simulations. It can be clearly noted that the analytical results align very well with the simulations. This showcases the usefulness of this methodology in gaining insights into the performance of ultra-dense networks without resorting to cumbersome and time-consuming Monte Carlo simulators.
Fig. (b) plots the network reliability for different BS-UE ratios $\zeta$ with a fixed number of users. Here, the reliability is measured in terms of the ratio between the number of UEs that achieve an instantaneous SNR above a given threshold and the total number of UEs. For the sake of fairness, the total power consumption of all BSs in the network is fixed, and the intent is to provide insights about reliability by asking the question: “is it better to have a few powerful transmitters or many low-power transmitters?”. From the figure, it can be noted that the use of many low-powered BSs provides higher reliability as compared to a few powerful BSs. However, while this holds from an average perspective, the fluctuations of the instantaneous UE SNRs are high in both scenarios. This means that network topologies with $\zeta$ slightly higher than unity are suitable when a low variance is sought after instead of a high average SNR.
VIII Conclusions
Enabling URLLC warrants a major departure from average-based performance towards a clean-slate design centered on tail, risk and scale. This article has reviewed recent advances in low-latency and ultra-high-reliability communication, in which key enablers have been closely examined. Several methodologies stemming from adjacent disciplines and tailored to the unique characteristics of URLLC have been described. In addition, via selected use cases, we have demonstrated how these tools provide a principled and clean-slate framework for modeling and optimizing URLLC-centric problems.
References
 [1] 3GPP, “Service requirements for the 5G system,” 3rd Generation Partnership Project (3GPP), TS 22.261 v16.0.0, June 2017.
 [2] C.-P. Li, J. Jiang, W. Chen, T. Ji, and J. Smee, “5G ultra-reliable and low-latency systems design,” in 2017 European Conference on Networks and Communications (EuCNC), June 2017, pp. 1–5.
 [3] E. Bastug, M. Bennis, E. Zeydan, M. A. Kader, A. Karatepe, A. S. Er, and M. Debbah, “Big data meets telcos: A proactive caching perspective,” CoRR, vol. abs/1602.06215, 2016. [Online]. Available: http://arxiv.org/abs/1602.06215
 [4] E. Bastug, M. Bennis, and M. Debbah, “Living on the edge: The role of proactive caching in 5g wireless networks,” Communications Magazine, IEEE, vol. 52, no. 8, pp. 82–89, Aug. 2014.
 [5] J. Konecný, H. B. McMahan, D. Ramage, and P. Richtárik, “Federated optimization: Distributed machine learning for ondevice intelligence,” CoRR, vol. abs/1610.02527, 2016. [Online]. Available: http://arxiv.org/abs/1610.02527
 [6] Y. Polyanskiy, H. V. Poor, and S. Verdu, “Channel coding rate in the finite blocklength regime,” IEEE Transactions on Information Theory, vol. 56, no. 5, pp. 2307–2359, May 2010.
 [7] G. Durisi, T. Koch, and P. Popovski, “Toward massive, ultrareliable, and lowlatency wireless communication with short packets,” Proceedings of the IEEE, vol. 104, no. 9, pp. 1711–1726, 2016. [Online]. Available: https://doi.org/10.1109/JPROC.2016.2537298
 [8] M. Simsek, A. Aijaz, M. Dohler, J. Sachs, and G. Fettweis, “5genabled tactile internet,” IEEE Journal on Selected Areas in Communications, vol. 34, no. 3, pp. 460–473, March 2016.
 [9] J. Park and P. Popovski, “Coverage and rate of downlink sequence transmissions with reliability guarantees,” CoRR, vol. abs/1704.05296, 2017. [Online]. Available: http://arxiv.org/abs/1704.05296
 [10] G. Pocovi, K. I. Pedersen, B. Soret, M. Lauridsen, and P. Mogensen, “On the impact of multiuser traffic dynamics on low latency communications,” in 2016 International Symposium on Wireless Communication Systems (ISWCS), Sept 2016, pp. 204–208.
 [11] X. Chen, T. Chen, and D. Guo, “Capacity of gaussian manyaccess channels,” IEEE Trans. Information Theory, vol. 63, no. 6, pp. 3516–3539, 2017. [Online]. Available: https://doi.org/10.1109/TIT.2017.2668391
 [12] W. Yang, G. Caire, G. Durisi, and Y. Polyanskiy, “Optimum power control at finite blocklength,” IEEE Transactions on Information Theory, vol. 61, no. 9, pp. 4598–4615, Sept 2015.
 [13] S. V. Hanly and D. N. C. Tse, “Multiaccess fading channels. II. Delay-limited capacities,” IEEE Transactions on Information Theory, vol. 44, no. 7, pp. 2816–2831, Nov 1998.
 [14] C.-S. Chang and T. Zajic, “Effective bandwidths of departure processes from queues with time varying capacities,” in INFOCOM ’95. Fourteenth Annual Joint Conference of the IEEE Computer and Communications Societies. Bringing Information to People. Proceedings. IEEE, Apr 1995, pp. 1001–1009 vol. 3.
 [15] H. Al-Zubaidy, J. Liebeherr, and A. Burchard, “Network-layer performance analysis of multi-hop fading channels,” IEEE/ACM Transactions on Networking, vol. 24, no. 1, pp. 204–217, Feb 2016.
 [16] E. Bastug, M. Bennis, E. Zeydan, M. A. Kader, I. A. Karatepe, A. S. Er, and M. Debbah, “Big data meets telcos: A proactive caching perspective,” Journal of Communications and Networks, vol. 17, no. 6, pp. 549–557, 2015. [Online]. Available: http://dx.doi.org/10.1109/JCN.2015.000102
 [17] B. Soret, P. Mogensen, K. I. Pedersen, and M. C. Aguayo-Torres, “Fundamental tradeoffs among reliability, latency and throughput in cellular networks,” in 2014 IEEE Globecom Workshops (GC Wkshps), Dec 2014, pp. 1391–1396.
 [18] P. Popovski, “Ultra-reliable communication in 5G wireless systems,” in 1st International Conference on 5G for Ubiquitous Connectivity, 5GU 2014, Levi, Finland, November 26–27, 2014, pp. 146–151. [Online]. Available: https://doi.org/10.4108/icst.5gu.2014.258154
 [19] J. Arnau and M. Kountouris, “Delay performance of MISO wireless communications,” CoRR, vol. abs/1707.08089, 2017. [Online]. Available: http://arxiv.org/abs/1707.08089
 [20] M. I. Ashraf, C.-F. Liu, M. Bennis, and W. Saad, “Towards low-latency and ultra-reliable vehicle-to-vehicle communication,” in 2017 European Conference on Networks and Communications, EuCNC 2017, Oulu, Finland, June 12–15, 2017, pp. 1–5. [Online]. Available: https://doi.org/10.1109/EuCNC.2017.7980743
 [21] M. S. El-Bamby, M. Bennis, and W. Saad, “Proactive edge computing in latency-constrained fog networks,” in 2017 European Conference on Networks and Communications, EuCNC 2017, Oulu, Finland, June 12–15, 2017, pp. 1–6.
 [22] P. Popovski, J. J. Nielsen, C. Stefanovic, E. de Carvalho, E. G. Ström, K. F. Trillingsgaard, A. Bana, D. Kim, R. Kotaba, J. Park, and R. B. Sørensen, “Ultra-reliable low-latency communication (URLLC): Principles and building blocks,” CoRR, vol. abs/1708.07862, 2017. [Online]. Available: http://arxiv.org/abs/1708.07862
 [23] A. Ahmadi-Javid, “Entropic value-at-risk: A new coherent risk measure,” Journal of Optimization Theory and Applications, vol. 155, no. 3, pp. 1105–1123, Dec 2012. [Online]. Available: https://doi.org/10.1007/s10957-011-9968-2
 [24] M. Bennis, S. M. Perlaza, Z. Han, and H. V. Poor, “Self-organization in small cell networks: A reinforcement learning approach,” IEEE Transactions on Wireless Communications, vol. 12, no. 7, pp. 3202–3212, Jul. 2013.
 [25] S. Coles, An Introduction to Statistical Modeling of Extreme Values. Springer, 2001.
 [26] M. Salehi, A. Mohammadi, and M. Haenggi, “Analysis of D2D underlaid cellular networks: SIR meta distribution and mean local delay,” IEEE Transactions on Communications, vol. 65, no. 7, pp. 2904–2916, July 2017.
 [27] R. K. Ganti and J. G. Andrews, “Correlation of link outages in low-mobility spatial wireless networks,” in 2010 Conference Record of the Forty Fourth Asilomar Conference on Signals, Systems and Computers, Nov 2010, pp. 312–316.
 [28] M. Abdulla and H. Wymeersch, “Fine-grained reliability for V2V communications around suburban and urban intersections,” CoRR, vol. abs/1706.10011, 2017. [Online]. Available: http://arxiv.org/abs/1706.10011
 [29] J. Hubbard, “Calculation of Partition Functions,” Phys. Rev. Lett., vol. 3, pp. 77–78, Jul 1959.
 [30] T. M. Nieuwenhuizen, “The Marriage Problem and the Fate of Bachelors,” Physica A: Statistical Mechanics and its Applications, vol. 252, no. 1, pp. 178–198, Aug 1998.
 [31] M. G. Dell’Erba, “Statistical Mechanics of a Simplified Bipartite Matching Problem: An Analytical Treatment,” Journal of Statistical Physics, vol. 146, no. 6, pp. 1263–1273, 2012.
 [32] O. Guéant, J.-M. Lasry, and P.-L. Lions, “Mean Field Games and Applications,” in Paris-Princeton Lectures on Mathematical Finance 2010, ser. Lecture Notes in Mathematics. Springer Berlin Heidelberg, 2011, vol. 2003, pp. 205–266.
 [33] P. E. Caines, “Mean Field Games,” in Encyclopedia of Systems and Control. Springer London, 2014, pp. 1–6.
 [34] M. Bennis, S. M. Perlaza, P. Blasco, Z. Han, and H. V. Poor, “Self-organization in small cell networks: A reinforcement learning approach,” IEEE Transactions on Wireless Communications, vol. 12, no. 7, pp. 3202–3212, 2013.
 [35] F. Qian, L. Ji, B. Han, and V. Gopalakrishnan, “Optimizing 360 video delivery over cellular networks,” in Proc. 5th Workshop on All Things Cellular: Operations, Applications and Challenges, ser. ATC ’16, New York, NY, USA, 2016, pp. 1–6. [Online]. Available: http://doi.acm.org/10.1145/2980055.2980056
 [36] C.-F. Liu, M. Bennis, and H. V. Poor, “Latency and reliability-aware task offloading and resource allocation for mobile edge computing,” in Proc. IEEE Global Commun. Conf. Workshops, Dec. 2017, pp. 1–7.
 [37] C.-F. Liu, M. Bennis, and H. V. Poor, “Latency and reliability-aware task offloading and resource allocation for mobile edge computing,” CoRR, vol. abs/1710.00590, 2017. [Online]. Available: http://arxiv.org/abs/1710.00590