Bayesian Cooperative Localization Using Received Signal Strength With Unknown Path Loss Exponent: Message Passing Approaches

Di Jin, Feng Yin, Carsten Fritsche, Fredrik Gustafsson, and Abdelhak M. Zoubir

The self-archived postprint version of this journal article is available at Linköping University Institutional Repository (DiVA): http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-165498

N.B.: When citing this work, cite the original publication: Jin, D., Yin, F., Fritsche, C., Gustafsson, F., Zoubir, A. M., (2020), Bayesian Cooperative Localization Using Received Signal Strength With Unknown Path Loss Exponent: Message Passing Approaches, IEEE Transactions on Signal Processing, 68, 1120-1135. https://doi.org/10.1109/TSP.2020.2969048

Original publication available at: https://doi.org/10.1109/TSP.2020.2969048
Copyright: Institute of Electrical and Electronics Engineers (IEEE), http://www.ieee.org/index.html

©2020 IEEE. Personal use of this material is permitted. However, permission to reprint/republish this material for advertising or promotional purposes or for creating new collective works for resale or redistribution to servers or lists, or to reuse any copyrighted component of this work in other works must be obtained from the IEEE.

D. Jin and A. M. Zoubir are with the Signal Processing Group at Technische Universität Darmstadt, Darmstadt, Germany. F. Yin (corresponding author) is with the School of Science and Engineering at the Chinese University of Hong Kong, Shenzhen, China. C. Fritsche and F. Gustafsson are with the Department of Electrical Engineering at Linköping University, Linköping, Sweden.

Abstract—We propose a Bayesian framework for the received-signal-strength-based cooperative localization problem with unknown path loss exponent. Our purpose is to infer the marginal posterior of each unknown parameter: the position or the path loss exponent. This probabilistic inference problem is solved using message passing algorithms that update messages and beliefs iteratively. For numerical tractability, we combine variable discretization and Monte-Carlo-based numerical approximation schemes. To further improve computational efficiency, we develop an auxiliary importance sampler that updates the beliefs with the help of an auxiliary variable. An important ingredient of the proposed auxiliary importance sampler is the ability to sample from a normalized likelihood function. To this end, we develop a stochastic sampling strategy that mathematically interprets and corrects an existing heuristic strategy. The proposed message passing algorithms are analyzed systematically in terms of computational complexity, demonstrating the computational efficiency of the proposed auxiliary importance sampler. Various simulations are conducted to validate the overall good performance of the proposed algorithms.

Index Terms—Belief propagation, cooperative localization, message passing, received signal strength, stochastic sampling.

I. INTRODUCTION

Location awareness plays a key enabling role in the expanding location-based services and has a wide range of emerging applications, such as medical services [1], intelligent transportation systems [2], Internet of Things [2], autonomous vehicles [3], crowd sensing [4], [5] and smart environment [6]. Cooperative localization [7]–[10], where all internode measurements can be exploited, has many appealing advantages, among others, expanding the capability of locating positions without ambiguity and improving the estimation accuracy. The benefits of cooperation among nodes have been theoretically demonstrated in [11], [12]. The existing cooperative localization algorithms can be categorized into deterministic and probabilistic approaches, depending on whether the localization problem is formulated in a probabilistic manner. In the first category, the positions (and model parameters, if any) are assumed to be deterministic but unknown, and only a deterministic point estimate is provided for each unknown parameter. Classical approaches, to mention some, include the maximum likelihood (ML) approach [7], convex-optimization-based algorithms [13]–[17], multidimensional scaling (MDS) [18], [19] and expectation-conditional maximization (ECM) [20]. On the other hand, the Bayesian approaches treat the positions as random variables and formulate cooperative localization as a probabilistic inference problem. These approaches take advantage of prior information about the parameters. Most importantly, the posterior distribution of each position is inferred, which contains much more information than a single deterministic point estimate, e.g., the modality of the position and its associated uncertainty. Representative probabilistic approaches include nonparametric belief propagation (NBP) [21], [22], the sum-product algorithm over a network (SPAWN) [8] and their low-complexity variants [23]–[26].

Among different position-related signal metrics, received signal strength (RSS) has gained much attention due to its ubiquitousness in wireless radio frequency signals [27]. For instance, an RSS indicator (RSSI) has been encoded in the IEEE 802.15.4 standards [28]. Despite its comparatively high uncertainty about position, the RSS measurement can be exploited to enable low-cost, simple and opportunistic localization systems, without the need for additional hardware. However, many existing works on RSS-based localization, such as [14], [29], are based on the assumption that the classical path loss propagation model is perfectly known. This oversimplified assumption is impractical for two reasons. Firstly, the estimation of these model parameters usually relies on a laborious calibration phase, where a large amount of training data needs to be collected and processed. Such a calibration step is very time consuming and even impossible in many scenarios, such as monitoring and surveillance applications in hostile or inaccessible environments [30]. Secondly, these model parameters, particularly the path loss exponent (PLE), are time varying due to the changing environment, e.g., weather conditions or human behavior [31], [32]. Without frequent recalibration, the resulting mismatch will significantly deteriorate the localization performance. To overcome this problem, these model parameters should be assumed unknown and estimated jointly with the positions.

In this paper, we focus on the case with an unknown PLE, because a slight deviation of the PLE may severely deteriorate the localization performance, as demonstrated theoretically and algorithmically in [33], [34]. For the case of noncooperative localization, there exist several works dealing with an unknown PLE.
In [30], the target position and the PLE are estimated jointly by solving an ML problem using the Levenberg-Marquardt algorithm. In [34], [35], the ML problem is first relaxed by linearizing the problem and then simplified by replacing the position variable with a function of the PLE variable. By doing so, the cost function depends only on the one-dimensional (1D) PLE variable, and the resulting optimization problem can be readily solved using grid search. In [30], [36], the location is estimated by eliminating the nuisance parameter: the PLE parameter (or several other model parameters). The original ML problem in [30] is simplified by representing the PLE as a function of the position variable in [33]. In [37], along with several model parameters, the location is estimated based on the expectation-maximization criterion. In [38], [39], the location and the PLE are estimated in an alternating manner. More precisely, the position is estimated based on an initialized (or estimated) PLE, and afterwards the PLE is estimated based on the updated position estimate. This procedure iterates until a certain termination condition is met. In the cooperative case, RSS-based localization with an unknown PLE is even more challenging. To the best of our knowledge, only very limited works exist, including [15], [17], [39], where the alternating strategy is adopted to handle the unknown PLE, as in the noncooperative case. In our view, despite its straightforwardness and simplicity, such an alternating strategy is quite heuristic and lacks theoretical support.

In contrast to previous works, we treat the PLE as a random variable and formulate the problem in a Bayesian framework. The reasons are as follows. First, when the PLEs of different propagation links differ, a random variable characterizing the averaging behavior of the collection of all PLEs is more suitable than a single deterministic PLE value. Second, characterizing the PLE as a random variable enables us to integrate any prior information, if available, into the parameter estimation. Under the Bayesian umbrella, the cooperative localization problem with an unknown PLE becomes a probabilistic inference problem. In this problem, we derive message passing algorithms to infer the marginalized posterior distribution of each unknown parameter: the position or the PLE. For mathematical tractability, we combine variable discretization and Monte-Carlo-based numerical approximation mechanisms. In addition, to reduce the computational complexity, we propose an auxiliary importance sampler for the belief update that has a complexity order scaling linearly with the number of samples. Moreover, we develop a novel strategy for sampling from a normalized likelihood function, which plays an important role in the auxiliary importance sampler and mathematically interprets and corrects an existing heuristic sampling strategy. The proposed sampling strategy will benefit many existing works, such as [8], [22], since this task is an embedded step in many message-passing-based cooperative localization algorithms.

This paper is organized as follows: In Section II, we formulate the RSS-based cooperative localization problem with an unknown PLE mathematically. Fundamental concepts in message passing algorithms are given in Section III. We discuss how to approximate the messages in Section IV and demonstrate how to update the beliefs approximately in Section V. Some important issues are discussed in Section VI. The proposed algorithms are evaluated using extensive simulations in Section VII. Finally, Section VIII concludes the paper.

Notation: Throughout this paper, a boldface lowercase letter x is reserved for vectors. ‖·‖ stands for the Euclidean norm, and |·| denotes the cardinality of a set. N(µ, σ²) denotes a Gaussian distribution with mean µ and variance σ²; U[a, b) denotes a uniform distribution with boundaries a and b; logN(µ, σ²) denotes a log-normally distributed random variable x with µ and σ² being the mean and variance of log x. f(·) and p(·) are reserved for probability density functions (pdf) and probability mass functions, respectively. f_N denotes the pdf of a Gaussian distribution. Γ\i represents the set consisting of all elements in the set Γ excluding the element i. {x^l}_{l=1}^{L} symbolizes a collection of samples x^1, ..., x^L.

II. PROBLEM FORMULATION

Consider a wireless sensor network (WSN) in 2D space with two types of nodes: blindfolded nodes (agents) with unknown locations and reference nodes (anchors) with known locations. Let x_i = [x_i, y_i]^T denote the location of each node, where i ∈ S_u = {1, ..., N_u} for an agent and i ∈ S_a = {N_u + 1, ..., N} for an anchor. The index set of all nodes is denoted by S, and we have S = S_u ∪ S_a. Two nodes i and j are neighbors if they are allowed to communicate with each other. We denote the index set of node i's neighbors by Γ_i.

Using the well-known log-distance path loss propagation model, the RSS measurement r_ij, coming from node i and received by node j, is given by

    r_ij = A_i − 10 α log10(d_ij / d_0) + v_ij,    (1)

where d_0 is a predefined reference distance, A_i denotes the reference power in dBm at d_0 and is assumed to be known, α denotes the path loss exponent (PLE), which is assumed to be an unknown random variable, and d_ij ≜ ‖x_i − x_j‖ is the Euclidean distance. v_ij stands for the log-normal shadowing error, modeled by v_ij ∼ N(0, σ²_ij). We assume the measurements to be symmetric, i.e., r_ij = r_ji. The collection of all RSS measurements is denoted by r ≜ {r_ij : (i, j) ∈ Γ}, where (i, j) represents that nodes i and j are neighbors, and Γ ≜ {(i, j) : j ∈ Γ_i and j > i; i ∈ S_u} denotes the set of all pairs of neighboring nodes. In line with the majority of the existing works, we assume that these shadowing measurement errors v_ij for all (i, j) ∈ Γ are independent. The distribution of v_ij, denoted by f_{v_ij}(v_ij), is assumed to be known.
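To make the measurement model concrete, the short Python sketch below simulates RSS measurements according to (1) for a toy network. The deployment size, reference power, shadowing level, true PLE and communication range are illustrative assumptions of ours, not values taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative toy setup (not the paper's simulation parameters):
# 4 anchors at known corners, 6 agents at unknown positions in a 20 m x 20 m area.
anchors = np.array([[0.0, 0.0], [20.0, 0.0], [0.0, 20.0], [20.0, 20.0]])
agents = rng.uniform(0.0, 20.0, size=(6, 2))
positions = np.vstack([agents, anchors])   # agents first (S_u), then anchors (S_a)

A = -30.0         # assumed reference power A_i in dBm at d_0 (same for all nodes here)
d0 = 1.0          # reference distance in meters
alpha_true = 3.0  # true path loss exponent, unknown to the estimator
sigma = 4.0       # shadowing standard deviation in dB, i.e. v_ij ~ N(0, sigma^2)
radio_range = 15.0

def rss_measurements(positions, n_agents):
    """Draw one symmetric RSS measurement r_ij per neighboring pair (i, j), following Eq. (1)."""
    r = {}
    n = positions.shape[0]
    for i in range(n_agents):            # i runs over agents only, so anchor-anchor pairs
        for j in range(i + 1, n):        # are excluded, cf. the definition of the set Gamma
            d_ij = np.linalg.norm(positions[i] - positions[j])
            if d_ij <= radio_range:      # nodes i and j are neighbors
                mean = A - 10.0 * alpha_true * np.log10(d_ij / d0)
                r[(i, j)] = mean + sigma * rng.standard_normal()
    return r

r = rss_measurements(positions, n_agents=agents.shape[0])
print(f"{len(r)} internode measurements, e.g. r_01 = {r.get((0, 1), None)}")
```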
It is important to stress that the propagation environment implied by the measurement model (1) is homogeneous, i.e., all links share a common PLE α. On the other hand, an inhomogeneous environment is defined for links with different PLEs. The same definitions can be found in [40] as well. It is true that each propagation link, e.g., the connection between node i and node j, should have an individual PLE α_ij. However, this will lead to an under-determined problem, since besides the unknown locations, there will be one unknown parameter α_ij associated with each measurement r_ij. As a compromise between model accuracy and feasibility, a homogeneous propagation environment is considered throughout this work. It is noticeable that even for a homogeneous propagation environment, the problem of RSS-based cooperative localization with an unknown PLE is readily challenging and rarely studied.
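Since the shadowing in (1) is Gaussian in the dB domain, the per-link likelihood f(r_ij | x_i, x_j, α) is simply a Gaussian density of the measured RSS centered at the predicted path loss. A minimal sketch, reusing the toy quantities introduced above (the function name and default values are ours):

```python
import numpy as np

def loglik_link(r_ij, x_i, x_j, alpha, A_i=-30.0, d0=1.0, sigma=4.0):
    """Log of f(r_ij | x_i, x_j, alpha): Gaussian in the dB domain, cf. Eq. (1)."""
    d_ij = max(np.linalg.norm(np.asarray(x_i) - np.asarray(x_j)), 1e-9)
    mean = A_i - 10.0 * alpha * np.log10(d_ij / d0)
    return -0.5 * np.log(2.0 * np.pi * sigma**2) - 0.5 * ((r_ij - mean) / sigma) ** 2

# Example: log-likelihood of one measurement for a hypothesized geometry and PLE.
print(loglik_link(r_ij=-55.0, x_i=[2.0, 3.0], x_j=[10.0, 7.0], alpha=3.0))
```

This per-link density is the building block of every message and belief computation in the remainder of the paper.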
From a Bayesian perspective, we treat the PLE α and each position x_i, i ∈ S, as random variables, whose prior distributions are denoted by f(α) and f(x_i), i ∈ S, respectively. All positions and the PLE variable are assumed to be mutually independent, i.e., f(α, x_1, ..., x_N) = f(α)·f(x_1)···f(x_N). Our purpose is to infer the marginalized posterior distribution (marginal posterior) of each unknown parameter, which is f(α | r) or f(x_i | r), i ∈ S_u, from the measurements r and the prior information about all parameters.

III. FUNDAMENTALS ON COOPERATIVE LOCALIZATION VIA MESSAGE PASSING

To infer the marginal posterior of the PLE variable α and that of each position x_i, i ∈ S_u, we start with the joint posterior distribution f(x_1, ..., x_N, α | r). Under the assumptions made in the preceding section, it has the form of

    f(x_1, ..., x_N, α | r) ∝ f(α) ∏_{i=1}^{N} [ f(x_i) ∏_{j∈Γ_i, j>i} f(r_ij | x_i, x_j, α) ].    (2)

Intuitively, the marginal posterior, e.g., f(x_i | r), can be calculated as follows:

    f(x_i | r) = ∫ ··· ∫ f(x_1, ..., x_N, α | r) dx_{1:N\i} dα.

However, this is intractable due to the high dimensionality of the problem. A well-known local message passing algorithm, called belief propagation (BP), enables approximate marginalization for loopy graphs in an efficient fashion [41].

A. Fundamentals on Belief Propagation

To facilitate the derivation of the proposed message passing algorithms, we start with the fundamentals of belief propagation. For that, a graphical model, the factor graph, is introduced first. For simplicity, we ignore the PLE variable α in this subsection. There are two distinctive types of nodes in the factor graph: the position variables in circles and the factors in squares. An example is depicted in Fig. 1. The factors stand for functions, such as the likelihood functions and the prior distribution functions. Two position variables are connected via a factor if there is a measurement between them.

[Fig. 1: An illustration of factor graph and belief propagation. For clarity, the overlap between the two rectangle blocks is omitted. Here, f_i is the short notation for f(x_i) and f_ij for the likelihood function f(r_ij | x_i, x_j).]

The basic operation in BP is the message passing procedure, which can be interpreted as exchanging statistical information between adjacent nodes of the factor graph. According to the message passing rules in BP [41], the message passed from factor f_ij to variable node x_i is obtained as

    m^n_{f_ij→x_i}(x_i) ∝ ∫ f(r_ij | x_i, x_j) [ f(x_j) ∏_{t∈Γ_j\i} m^{n−1}_{f_tj→x_j}(x_j) ] dx_j,

where the bracketed term is denoted by m̃(x_j). Here, the superscript n is the iteration index, and Γ_j\i denotes the set of all neighbors of node j excluding node i. Roughly speaking, the message m^n_{f_ij→x_i}(x_i) can be interpreted as the statistical information on position x_i coming from its neighbor j. This is obtained by transforming the statistical information (m̃(x_j)) on position x_j, taking into account the likelihood function f(r_ij | x_i, x_j). Collecting all messages from factors that are connected to the position variable x_i, the belief at x_i, denoted by B^n(x_i), is obtained as

    B^n(x_i) ∝ f(x_i) ∏_{j∈Γ_i} m^n_{f_ij→x_i}(x_i).

The belief B^n(x_i) can be interpreted as collecting the statistical information on x_i coming from all its neighbors (m^n_{f_ij→x_i}(x_i), ∀ j ∈ Γ_i) and its own prior (f(x_i)). Such a message passing procedure iterates until a certain termination condition is fulfilled. Upon convergence, the belief B(x_i) is an approximation of the marginal posterior f(x_i | r).
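The message integral above generally has no closed form for the RSS likelihood, which is why NBP-style implementations represent m̃(x_j) by weighted samples. Purely as an illustration of the update rules of this subsection (not the algorithm proposed later in the paper), the following sketch approximates one message and the resulting belief on a grid of candidate positions for x_i, with the PLE treated as known, consistent with α being ignored here; all names and numbers are our own toy choices.

```python
import numpy as np

def gaussian_pdf(x, mean, sigma):
    return np.exp(-0.5 * ((x - mean) / sigma) ** 2) / (np.sqrt(2.0 * np.pi) * sigma)

def message_f_to_xi(grid_xi, r_ij, xj_samples, xj_weights, alpha, A_i=-30.0, d0=1.0, sigma=4.0):
    """Monte Carlo approximation of m_{f_ij -> x_i}(x_i) on a grid:
    m(x_i) ~= sum_l w_l * f(r_ij | x_i, x_j^l), where {x_j^l, w_l} represent m~(x_j)."""
    msg = np.zeros(len(grid_xi))
    for xj, w in zip(xj_samples, xj_weights):
        d = np.maximum(np.linalg.norm(grid_xi - xj, axis=1), 1e-9)  # guard against zero distance
        mean = A_i - 10.0 * alpha * np.log10(d / d0)
        msg += w * gaussian_pdf(r_ij, mean, sigma)
    return msg / msg.sum()                 # normalized over the grid

def belief(grid_xi, prior_on_grid, incoming_messages):
    """B(x_i) proportional to f(x_i) * prod_j m_{f_ij -> x_i}(x_i), evaluated on the grid."""
    b = prior_on_grid.copy()
    for msg in incoming_messages:
        b *= msg
    return b / b.sum()

# Toy usage: a 30 x 30 grid over the deployment area, a uniform prior, and one neighbor j
# represented by equally weighted samples concentrated around (10, 7).
xs, ys = np.meshgrid(np.linspace(0, 20, 30), np.linspace(0, 20, 30))
grid = np.column_stack([xs.ravel(), ys.ravel()])
prior = np.full(len(grid), 1.0 / len(grid))
rng = np.random.default_rng(1)
xj_samples = rng.normal([10.0, 7.0], 0.5, size=(200, 2))
xj_weights = np.full(200, 1.0 / 200)
m = message_f_to_xi(grid, r_ij=-55.0, xj_samples=xj_samples, xj_weights=xj_weights, alpha=3.0)
b = belief(grid, prior, [m])
print("MAP grid point of the belief:", grid[np.argmax(b)])
```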
B. Proposed Message Passing Algorithms

Despite the fact that several works, e.g., [21], [42], exist for cooperative localization via BP, it is impossible to apply them directly to our problem. The reason is that, unlike a pairwise potential function in the existing works, the likelihood function f(r_ij | x_i, x_j, α) in our problem is of order three. This makes the BP algorithm for our problem not straightforward, and, hence, we will derive it explicitly in what follows. We first represent the joint posterior distribution f(x_1, ..., x_N, α | r) using the factor graph, see Fig. 2. The PLE variable is connected to all likelihood functions as it is related to all the measurements.

The key idea of the BP is to update a set of messages iteratively, which contribute to calculating the marginal posteriors. With a slight notation abuse, we use f_ij as a shorthand notation for the likelihood function f(r_ij | x_i, x_j, α) from now on. We denote the message from factor f_ij to the variable α by m_{f_ij→α}(α) and that from f_ij to x_i by m_{f_ij→x_i}(x_i). The messages m_{f_ij→α}(α) and m_{f_ij→x_i}(x_i) are updated according to the following rules:

    m^n_{f_ij→α}(α) ∝ ∬ f(r_ij | x_i, x_j, α) [ f(x_i) ∏_{s∈Γ_i\j} m^{n−1}_{f_si→x_i}(x_i) ] · [ f(x_j) ∏_{t∈Γ_j\i} m^{n−1}_{f_tj→x_j}(x_j) ] dx_i dx_j,    (3a)

    m^n_{f_ij→x_i}(x_i) ∝ ∬ f(r_ij | x_i, x_j, α) [ f(x_j) ∏_{t∈Γ_j\i} m^{n−1}_{f_tj→x_j}(x_j) ] · [ f(α) ∏_{(u,z)∈Γ\(i,j)} m^{n−1}_{f_uz→α}(α) ] dx_j dα.    (3b)
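The paper handles these integrals by combining variable discretization with Monte-Carlo-based approximation; the actual schemes are developed in Sections IV and V. As a rough, illustrative rendering of (3a) and (3b) only, the sketch below discretizes α on a grid and represents each incoming position term by weighted samples. All names and parameter values are our own assumptions, and the efficient auxiliary importance sampler proposed in the paper is not reproduced here.

```python
import numpy as np

def lik(r_ij, xi, xj, alpha, A_i=-30.0, d0=1.0, sigma=4.0):
    """Per-link RSS likelihood f(r_ij | x_i, x_j, alpha), cf. Eq. (1)."""
    d = max(np.linalg.norm(np.asarray(xi) - np.asarray(xj)), 1e-9)
    mean = A_i - 10.0 * alpha * np.log10(d / d0)
    return np.exp(-0.5 * ((r_ij - mean) / sigma) ** 2) / (np.sqrt(2 * np.pi) * sigma)

def msg_f_to_alpha(alpha_grid, r_ij, xi_s, wi, xj_s, wj):
    """Eq. (3a): the double integral over x_i and x_j is replaced by sums over weighted
    samples {x_i^l, w_i^l} and {x_j^m, w_j^m} that stand in for the bracketed products."""
    msg = np.array([
        sum(wl * wm * lik(r_ij, xl, xm, a)
            for xl, wl in zip(xi_s, wi) for xm, wm in zip(xj_s, wj))
        for a in alpha_grid
    ])
    return msg / msg.sum()

def msg_f_to_xi(xi_candidates, r_ij, xj_s, wj, alpha_grid, alpha_weights):
    """Eq. (3b): the integral over x_j and alpha is replaced by sums over the x_j samples
    and the discretized alpha, where alpha_weights stands in for f(alpha) times the
    product of incoming PLE messages from all other factors."""
    msg = np.array([
        sum(wm * pa * lik(r_ij, xc, xm, a)
            for xm, wm in zip(xj_s, wj) for a, pa in zip(alpha_grid, alpha_weights))
        for xc in xi_candidates
    ])
    return msg / msg.sum()

# Toy usage with small sample sets.
rng = np.random.default_rng(2)
alpha_grid = np.linspace(2.0, 5.0, 31)
alpha_weights = np.full(alpha_grid.size, 1.0 / alpha_grid.size)   # e.g. a uniform PLE prior
xi_s = rng.normal([3.0, 4.0], 1.0, size=(50, 2)); wi = np.full(50, 1 / 50)
xj_s = rng.normal([10.0, 7.0], 1.0, size=(50, 2)); wj = np.full(50, 1 / 50)

m_alpha = msg_f_to_alpha(alpha_grid, r_ij=-55.0, xi_s=xi_s, wi=wi, xj_s=xj_s, wj=wj)
m_xi = msg_f_to_xi(xi_s, r_ij=-55.0, xj_s=xj_s, wj=wj, alpha_grid=alpha_grid, alpha_weights=alpha_weights)
print("most supported alpha value:", alpha_grid[np.argmax(m_alpha)])
```

Note that the naive double sums above scale quadratically with the number of samples per message; reducing this cost to a linear order is precisely what the auxiliary importance sampler announced in the abstract is designed to achieve.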