Statistics

All posts under category "Statistics"

17 posts total
Sorted by date
Optimizing a Supply-Side Platform's Header Bidding Strategy with Thompson Sampling

Over the last decade, digital media (web and app publishers) generalized the use of real-time ad auctions to sell their ad spaces. Multiple auction platforms, also called Supply-Side Platforms (SSPs), were created. Because of this multiplicity, publishers started to create competition between SSPs. In this setting, there are two successive auctions: a second-price auction in each SSP and a secondary, first-price auction, called the header bidding auction, between SSPs. In this paper, we consider an SSP competing with other SSPs for ad spaces. The SSP acts as an intermediary between an advertiser wanting to buy ad spaces and a web publisher wanting to sell its ad spaces, and needs to define a bidding strategy that delivers as many ads as possible to the advertisers while spending as little as possible. The revenue optimization of this SSP can be written as a contextual bandit problem, where the context consists of the information available about the ad opportunity, such as properties of the internet user or of the ad placement. Using classical multi-armed bandit strategies (such as the original versions of UCB and EXP3) is inefficient in this setting and yields a low convergence speed, as the arms are highly correlated. In this paper we design and evaluate a version of the Thompson Sampling algorithm that easily takes this correlation into account. We combine this Bayesian algorithm with a particle filter, which makes it possible to handle non-stationarity by sequentially estimating the distribution of the highest bid to beat in order to win an auction. We apply this methodology to two real auction datasets, and show that it significantly outperforms more classical approaches. The strategy defined in this paper is being developed to be deployed on thousands of publishers worldwide.
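
As a minimal sketch of the core idea, consider posterior sampling over the distribution of the highest competing bid rather than over independent arms: one posterior draw prices the whole bid grid coherently, which is how Thompson Sampling exploits the correlation between bid levels. The exponential bid model, conjugate Gamma prior, bid grid, and win-rate target below are all illustrative assumptions; the paper replaces the conjugate update with a particle filter to track non-stationary bid distributions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy ground truth: the highest competing bid is Exponential(rate=2.0).
TRUE_RATE = 2.0
BID_GRID = np.linspace(0.05, 3.0, 60)   # candidate bid levels (correlated arms)
TARGET_WIN_RATE = 0.8                   # hypothetical delivery target

# Conjugate Gamma(a, rate=b) prior on the exponential rate parameter.
a, b = 1.0, 1.0

spend, wins = 0.0, 0
for t in range(5000):
    lam = rng.gamma(a, 1.0 / b)                 # one Thompson draw ...
    win_prob = 1.0 - np.exp(-lam * BID_GRID)    # ... prices every arm coherently
    feasible = BID_GRID[win_prob >= TARGET_WIN_RATE]
    bid = feasible[0] if feasible.size else BID_GRID[-1]   # cheapest feasible bid

    competitor = rng.exponential(1.0 / TRUE_RATE)
    if bid > competitor:                        # first-price auction: pay own bid
        wins += 1
        spend += bid
    # Update assumes the highest competing bid is revealed after each auction.
    a, b = a + 1.0, b + competitor

print(f"win rate {wins / 5000:.3f}, average cost per win {spend / max(wins, 1):.3f}")
```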

paper research
Clock Synchronization Over Networks: Identifiability of the Sawtooth Model

In this paper, we analyze the two-node joint clock synchronization and ranging problem. We focus on the case of nodes that employ time-to-digital converters to determine the range between them precisely. This specific design choice leads to a sawtooth model for the captured signal, which has not been studied before from an estimation-theoretic standpoint. In the study of this model, we recover the basic conclusion of a well-known article by Freris, Graham, and Kumar on clock synchronization. More importantly, we discover a surprising identifiability result for the sawtooth signal model: noise improves the theoretical conditions for the estimation of the phase and offset parameters. To complete our study, we provide performance references for joint clock synchronization and ranging using the sawtooth signal model by presenting an exhaustive simulation study of basic estimation strategies under different realistic conditions. With our contributions in this paper, we enable further research in the estimation of sawtooth signal models and pave the path towards their industrial use for clock synchronization and ranging.
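
For intuition about why this model is unusual, a toy simulation helps: the observation wraps around modulo the clock period, so in the noiseless case distinct parameter pairs can produce identical samples, while noise dithers the wrap-around points and restores information. The observation model, parameter values, and least-squares grid search below are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

rng = np.random.default_rng(1)

# Assumed observation model: y_n = ((phi + n * delta) mod T) + w_n, w_n ~ N(0, s^2).
T_PERIOD, PHI, DELTA, SIGMA, N = 1.0, 0.37, 0.011, 0.05, 200
n = np.arange(N)
y = np.mod(PHI + n * DELTA, T_PERIOD) + SIGMA * rng.normal(size=N)

# Naive joint grid search for (phi, delta), a least-squares stand-in for the
# estimation strategies benchmarked in the paper.
phis = np.linspace(0.0, T_PERIOD, 101)
deltas = np.linspace(0.0, 0.02, 101)
best = min((np.sum((y - np.mod(p + n * d, T_PERIOD)) ** 2), p, d)
           for p in phis for d in deltas)
print(f"estimated phi = {best[1]:.3f}, delta = {best[2]:.4f}")
```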

paper research
Large-Scale Traffic Signal Control with a New Multi-Agent Reinforcement Learning Approach

Finding the optimal signal timing strategy is a difficult task for the problem of large-scale traffic signal control (TSC). Multi-Agent Reinforcement Learning (MARL) is a promising method to solve this problem. However, there is still room for improvement in extending to large-scale problems and in modeling the behaviors of other agents for each individual agent. In this paper, a new MARL algorithm, called Cooperative double Q-learning (Co-DQL), is proposed, which has several prominent features. It uses a highly scalable independent double Q-learning method based on double estimators and the UCB policy, which eliminates the over-estimation problem of traditional independent Q-learning while ensuring exploration. It uses mean-field approximation to model the interaction among agents, thereby making agents learn a better cooperative strategy. In order to improve the stability and robustness of the learning process, we introduce a new reward allocation mechanism and a local state sharing method. In addition, we analyze the convergence properties of the proposed algorithm. Co-DQL is applied to TSC and tested on a multi-traffic-signal simulator. According to the results obtained on several traffic scenarios, Co-DQL outperforms several state-of-the-art decentralized MARL algorithms. It can effectively shorten the average waiting time of the vehicles in the whole road system.
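
The two building blocks named in the abstract, double estimators and UCB exploration, are easy to sketch in the single-agent tabular case. Everything specific to Co-DQL (mean-field interaction terms, reward allocation, local state sharing) is omitted, and the toy environment below is invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)

S, A = 10, 4                       # toy state and action spaces
qa, qb = np.zeros((S, A)), np.zeros((S, A))
counts = np.ones((S, A))           # visit counts for the UCB bonus
alpha, gamma, c = 0.1, 0.95, 1.0

def step(s, a):
    """Invented environment: random transitions, reward for matching s mod A."""
    return int(rng.integers(S)), float(rng.normal(loc=(a == s % A)))

s = 0
for t in range(1, 20001):
    q_mean = (qa[s] + qb[s]) / 2.0
    a = int(np.argmax(q_mean + c * np.sqrt(np.log(t) / counts[s])))  # UCB policy
    s2, r = step(s, a)
    counts[s, a] += 1
    if rng.random() < 0.5:         # double estimators: decouple selection/evaluation
        qa[s, a] += alpha * (r + gamma * qb[s2, np.argmax(qa[s2])] - qa[s, a])
    else:
        qb[s, a] += alpha * (r + gamma * qa[s2, np.argmax(qb[s2])] - qb[s, a])
    s = s2

print(np.round((qa + qb) / 2.0, 2))
```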

paper research
Rank Estimators in High-Dimensional Spaces

The family of rank estimators, including Han's maximum rank correlation (Han, 1987) as a notable example, has been widely exploited in studying regression problems. For these estimators, although the linear index is introduced to alleviate the impact of dimensionality, the effect of large dimension on inference is rarely studied. This paper fills this gap by studying the statistical properties of a larger family of M-estimators, whose objective functions are formulated as U-processes and may be discontinuous, in an increasing-dimension setup where the number of parameters, $p_{n}$, in the model is allowed to increase with the sample size, $n$. First, we find that in estimation a $(p_{n}/n)^{1/2}$ rate of convergence is often obtainable as $p_{n}/n \rightarrow 0$. Second, we establish Bahadur-type bounds and study the validity of normal approximation, which we find often requires a much stronger scaling requirement than $p_{n}^{2}/n \rightarrow 0$. Third, we state conditions under which the numerical derivative estimator of the asymptotic covariance matrix is consistent, and show that the step size in implementing the covariance estimator has to be adjusted with respect to $p_{n}$. All theoretical results are further backed up by simulation studies.
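
Concretely, Han's maximum rank correlation estimator maximizes the fraction of observation pairs whose ordering by the response agrees with their ordering by the linear index. A small sketch under an assumed monotone-transformation model, using a grid search over directions since the index is identified only up to scale:

```python
import numpy as np

rng = np.random.default_rng(3)

def mrc(beta, X, y):
    """Fraction of pairs whose ordering by y matches their ordering by x'beta."""
    idx = X @ beta
    agree = (y[:, None] > y[None, :]) & (idx[:, None] > idx[None, :])
    n = len(y)
    return 2.0 * agree.sum() / (n * (n - 1))

# Toy monotone-transformation model: y = exp(x'beta0) + noise.
n, beta0 = 300, np.array([1.0, -0.5])
X = rng.normal(size=(n, 2))
y = np.exp(X @ beta0) + 0.1 * rng.normal(size=n)

# The index is identified only up to scale, so search over directions.
angles = np.linspace(-np.pi, np.pi, 360)
score, angle = max((mrc(np.array([np.cos(a), np.sin(a)]), X, y), a) for a in angles)
print(f"best direction ({np.cos(angle):.3f}, {np.sin(angle):.3f}), objective {score:.3f}")
```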

paper research

Distributed Deep Convolutional Neural Networks for the Internet-of-Things

Severe constraints on memory and computation characterizing Internet-of-Things (IoT) units may prevent the execution of Deep Learning (DL)-based solutions, which typically demand large memory and a high processing load. In order to support real-time execution of the considered DL model at the IoT-unit level, DL solutions must be designed with the constraints on memory and processing capability exposed by the chosen IoT technology in mind. In this paper, we introduce a design methodology for allocating the execution of Convolutional Neural Networks (CNNs) onto a distributed IoT application. The methodology is formalized as an optimization problem in which the latency between the data-gathering phase and the subsequent decision-making one is minimized, within the given constraints on memory and processing load at the unit level. The methodology supports multiple sources of data as well as multiple CNNs in execution on the same IoT system, allowing the design of CNN-based applications demanding autonomy, low decision latency, and high Quality-of-Service.
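
To make the flavor of the allocation problem concrete, here is a toy greedy placement of CNN layers onto memory-constrained units, using the number of inter-unit transfers as a crude latency proxy. The memory figures and the greedy rule are invented for illustration; the paper formulates and solves a proper optimization problem instead.

```python
# Toy greedy placement: fill each IoT unit with consecutive CNN layers until
# its memory budget is exhausted, then spill to the next unit (one more hop).
layer_mem = [40, 120, 120, 60, 20]        # memory demand per CNN layer (KB)
unit_mem = [128, 256, 128]                # memory budget per IoT unit (KB)

placement, unit, used = [], 0, 0
for i, m in enumerate(layer_mem):
    if used + m > unit_mem[unit]:         # next unit => one extra network hop
        unit, used = unit + 1, 0
        if unit >= len(unit_mem):
            raise RuntimeError("layers do not fit on the available units")
    placement.append(unit)
    used += m

hops = sum(placement[i] != placement[i + 1] for i in range(len(placement) - 1))
print(placement, f"{hops} inter-unit transfer(s)")
```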

paper research
An Introduction to Decentralized Stochastic Optimization with Gradient Tracking

Decentralized solutions to finite-sum minimization are of significant importance in many signal processing, control, and machine learning applications. In such settings, the data is distributed over a network of arbitrarily connected nodes, and raw data sharing is often prohibitive due to communication or privacy constraints. In this article, we review decentralized stochastic first-order optimization methods and illustrate some recent improvements based on gradient tracking and variance reduction, focusing particularly on smooth and strongly-convex objective functions. We provide intuitive illustrations of the main technical ideas as well as applications of the algorithms in the context of decentralized training of machine learning models.
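
As a sketch of the gradient-tracking idea, each node mixes its iterate with its neighbors' and takes a step along an auxiliary variable that tracks the network-average gradient. The ring topology, quadratic objectives, and step size below are illustrative choices, and the deterministic gradients stand in for the stochastic ones the article covers.

```python
import numpy as np

# Each of m nodes holds f_i(x) = 0.5 * (x - b_i)^2; the global minimizer is mean(b).
m, alpha, iters = 8, 0.01, 5000
b = np.arange(m, dtype=float)

# Doubly stochastic mixing matrix for a ring topology.
W = np.zeros((m, m))
for i in range(m):
    W[i, i] = 0.5
    W[i, (i - 1) % m] = W[i, (i + 1) % m] = 0.25

grad = lambda x: x - b             # stacked local gradients
x = np.zeros(m)
y = grad(x)                        # tracker initialized at the local gradients
g_old = grad(x)
for _ in range(iters):
    x = W @ x - alpha * y          # consensus step plus tracked-gradient step
    g_new = grad(x)
    y = W @ y + g_new - g_old      # y_i tracks the network-average gradient
    g_old = g_new

print(np.round(x, 3))              # every entry approaches mean(b) = 3.5
```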

paper research
Exact Computation with Infinitely Wide Neural Networks

How well does a classic deep net architecture like AlexNet or VGG19 classify on a standard dataset such as CIFAR-10 when its width, namely the number of channels in convolutional layers and the number of nodes in fully-connected internal layers, is allowed to increase to infinity? Such questions have come to the forefront in the quest to theoretically understand deep learning and its mysteries about optimization and generalization. They also connect deep learning to notions such as Gaussian processes and kernels. A recent paper [Jacot et al., 2018] introduced the Neural Tangent Kernel (NTK), which captures the behavior of fully-connected deep nets in the infinite-width limit trained by gradient descent; this object was implicit in some other recent papers. An attraction of such ideas is that a pure kernel-based method is used to capture the power of a fully-trained deep net of infinite width. The current paper gives the first efficient exact algorithm for computing the extension of NTK to convolutional neural nets, which we call Convolutional NTK (CNTK), as well as an efficient GPU implementation of this algorithm. This results in a significant new benchmark for the performance of a pure kernel-based method on CIFAR-10, being 10% higher than the methods reported in [Novak et al., 2019], and only 6% lower than the performance of the corresponding finite deep net architecture (once batch normalization, etc. are turned off). Theoretically, we also give the first non-asymptotic proof showing that a fully-trained sufficiently wide net is indeed equivalent to the kernel regression predictor using NTK.
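
For the fully-connected case, the infinite-width NTK has a closed-form layer-by-layer recursion for ReLU activations; the convolutional extension, CNTK, applies the same recursion over spatial patches. The sketch below uses one common normalization convention (c_sigma = 2) and should be read as an illustration rather than the paper's exact implementation.

```python
import numpy as np

def relu_ntk(x1, x2, depth=3):
    """Infinite-width NTK of a fully-connected ReLU net on a pair of inputs,
    under one common normalization (c_sigma = 2), built layer by layer."""
    s11, s22, s12 = x1 @ x1, x2 @ x2, x1 @ x2    # Sigma^0 entries
    theta = s12                                  # Theta^0
    for _ in range(depth):
        lam = np.clip(s12 / np.sqrt(s11 * s22), -1.0, 1.0)
        k0 = (np.pi - np.arccos(lam)) / np.pi                 # derivative kernel
        k1 = (np.sqrt(1.0 - lam**2)
              + (np.pi - np.arccos(lam)) * lam) / np.pi       # arc-cosine kernel
        s12 = np.sqrt(s11 * s22) * k1    # diagonals are preserved since k1(1) = 1
        theta = s12 + theta * k0         # NTK recursion
    return theta

x, z = np.array([1.0, 0.0]), np.array([0.6, 0.8])
print(relu_ntk(x, z))
```

Kernel regression with the resulting Gram matrix then plays the role of the fully-trained infinitely wide net.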

paper research
From Interpretability to Inference: An Estimation Framework for Universal Approximators

We present a novel framework for estimation and inference with the broad class of universal approximators. Estimation is based on the decomposition of model predictions into Shapley values. Inference relies on analyzing the bias and variance properties of individual Shapley components. We show that Shapley value estimation is asymptotically unbiased, and we introduce Shapley regressions as a tool to uncover the true data-generating process from noisy data alone. The well-known linear regression arises as a special case of our framework when the model is linear in its parameters. We present theoretical, numerical, and empirical results for the estimation of heterogeneous treatment effects as our guiding example.
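
The decomposition itself is straightforward to approximate by sampling feature orderings: each feature's Shapley value is its average marginal contribution when features are switched in one at a time. The permutation-sampling scheme and the mean-background convention for absent features below are common choices, assumed here purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(4)

def shapley(model, x, background, n_perm=500):
    """Monte Carlo Shapley decomposition of one prediction; absent features
    are held at a background value (one common convention)."""
    d, phi = len(x), np.zeros(len(x))
    for _ in range(n_perm):
        z = background.copy()
        prev = model(z)
        for j in rng.permutation(d):
            z[j] = x[j]                 # add feature j to the coalition
            cur = model(z)
            phi[j] += cur - prev        # its marginal contribution
            prev = cur
    return phi / n_perm

model = lambda v: 3 * v[0] - 2 * v[1] + v[0] * v[1]   # toy approximator
x, bg = np.array([1.0, 2.0]), np.array([0.0, 0.0])
phi = shapley(model, x, bg)
print(phi, phi.sum(), model(x) - model(bg))  # contributions sum to the gap
```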

paper research
Communication-Efficient Federated Deep Learning with Asynchronous Model Updates and Temporally Weighted Aggregation

Federated learning obtains a central model on the server by aggregating models trained locally on clients. As a result, federated learning does not require clients to upload their data to the server, thereby preserving the data privacy of the clients. One challenge in federated learning is to reduce the client-server communication, since the end devices typically have very limited communication bandwidth. This paper presents an enhanced federated learning technique by proposing an asynchronous learning strategy on the clients and a temporally weighted aggregation of the local models on the server. In the asynchronous learning strategy, the different layers of the deep neural networks are categorized into shallow and deep layers, and the parameters of the deep layers are updated less frequently than those of the shallow layers. Furthermore, a temporally weighted aggregation strategy is introduced on the server to make use of the previously trained local models, thereby enhancing the accuracy and convergence of the central model. The proposed algorithm is empirically evaluated on two datasets with different deep neural networks. Our results demonstrate that the proposed asynchronous federated deep learning outperforms the baseline algorithm both in terms of communication cost and model accuracy.
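
A minimal sketch of the server-side aggregation, assuming a geometric decay in the time since each client's model was last updated; the decay base here is an assumption, not necessarily the paper's constant.

```python
import numpy as np

def aggregate(local_models, timestamps, now, base=np.e / 2):
    """Weight each client's model by how recently it was updated; the
    geometric decay and its base are assumptions, not the paper's exact choice."""
    w = base ** -(now - np.asarray(timestamps, dtype=float))   # recency weights
    w = w / w.sum()
    return sum(wi * m for wi, m in zip(w, local_models))

# Three clients' parameter vectors, last updated at rounds 1, 4 and 5.
models = [np.array([1.0, 0.0]), np.array([0.0, 1.0]), np.array([0.5, 0.5])]
print(aggregate(models, timestamps=[1, 4, 5], now=5))
```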

paper research
Prediction Intervals for Synthetic Control Methods

Uncertainty quantification is a fundamental problem in the analysis and interpretation of synthetic control (SC) methods. We develop conditional prediction intervals in the SC framework, and provide conditions under which these intervals offer finite-sample probability guarantees. Our method allows for covariate adjustment and non-stationary data. The construction begins by noting that the statistical uncertainty of the SC prediction is governed by two distinct sources of randomness: one coming from the construction of the (likely misspecified) SC weights in the pre-treatment period, and the other coming from the unobservable stochastic error in the post-treatment period when the treatment effect is analyzed. Accordingly, our proposed prediction intervals are constructed taking both sources of randomness into account. For implementation, we propose a simulation-based approach along with finite-sample-based probability bound arguments, naturally leading to principled sensitivity analysis methods. We illustrate the numerical performance of our methods using empirical applications and a small simulation study. Python, R, and Stata software packages implementing our methodology are available.
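
A deliberately crude sketch of the setting: fit simplex-constrained SC weights on the pre-treatment period, then shift the post-treatment SC prediction by pre-period residual quantiles. This captures only the second source of randomness; the paper's simulation-based bounds additionally account for the uncertainty in the estimated weights. All numbers below are invented.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(5)

# Invented panel: the treated unit is a noisy convex combination of 4 donors.
T0, T1, J = 40, 10, 4                       # pre/post periods, donor units
donors = rng.normal(size=(T0 + T1, J)).cumsum(axis=0)
w_true = np.array([0.5, 0.3, 0.2, 0.0])
treated = donors @ w_true + 0.3 * rng.normal(size=T0 + T1)

# SC weights: pre-period least squares over the simplex.
obj = lambda w: np.sum((treated[:T0] - donors[:T0] @ w) ** 2)
res = minimize(obj, np.full(J, 1.0 / J), method="SLSQP",
               bounds=[(0.0, 1.0)] * J,
               constraints={"type": "eq", "fun": lambda w: w.sum() - 1.0})
w = res.x

# Crude interval: post-period SC prediction shifted by pre-period residual
# quantiles; weight-estimation uncertainty is ignored here.
resid = treated[:T0] - donors[:T0] @ w
lo, hi = np.quantile(resid, [0.05, 0.95])
pred = donors[T0:] @ w
print(np.c_[pred + lo, pred + hi])          # naive 90% intervals per post period
```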

paper research
Training DNN IoT Applications for Deployment on Analog NVM Crossbars

A trend towards energy efficiency, security, and privacy has led to a recent focus on deploying DNNs on microcontrollers. However, limits on compute and memory resources restrict the size and the complexity of the ML models deployable in these systems. Computation-in-memory architectures based on resistive nonvolatile memory (NVM) technologies hold great promise of satisfying the compute and memory demands, of high performance and low power, inherent in modern DNNs. Nevertheless, these technologies are still immature and suffer both from intrinsic analog-domain noise problems and from the inability to represent negative weights in the NVM structures, incurring larger crossbar sizes with a concomitant impact on ADCs and DACs. In this paper, we provide a training framework for addressing these challenges and quantitatively evaluate the circuit-level efficiency gains thus accrued. We make two contributions. First, we propose a training algorithm that eliminates the need for tuning individual layers of a DNN, ensuring uniformity across layer weights and activations. This yields analog blocks that can be reused and substantially reduces peripheral hardware. Second, using NAS methods, we propose the use of unipolar-weighted (either all-positive or all-negative weights) matrices/sub-matrices. Weight unipolarity obviates the need for doubling the crossbar area, leading to simplified analog periphery. We validate our methodology with CIFAR10 and HAR applications by mapping to crossbars using 4-bit and 2-bit devices. We achieve up to 92.91% accuracy (95% floating-point) using 2-bit only-positive weights for HAR. A combination of the proposed techniques leads to an 80% area improvement and up to 45% energy reduction.

paper research

Resampled Statistics for Dependence-Robust Inference

We develop inference procedures robust to general forms of weak dependence. The procedures utilize test statistics constructed by resampling in a manner that does not depend on the unknown correlation structure of the data. We prove that the statistics are asymptotically normal under the weak requirement that the target parameter can be consistently estimated at the parametric rate. This holds for regular estimators under many well-known forms of weak dependence and justifies the claim of dependence-robustness. We consider applications to settings with unknown or complicated forms of dependence, with various forms of network dependence as leading examples. We develop tests for both moment equalities and inequalities.
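
A caricature of the resampling idea: conditional on the data, small random resamples are i.i.d. draws from the empirical distribution, so statistics built from them inherit a normal limit regardless of how the original observations were correlated. The MA(1)-type data, resample size, and normalization below are illustrative choices, not the paper's exact construction.

```python
import numpy as np

rng = np.random.default_rng(7)

# Weakly dependent data with a correlation structure unknown to the analyst.
n, m, B = 5000, 50, 2000
e = rng.normal(size=n)
x = e + 0.8 * np.roll(e, 1)

# Conditional on the data, the m draws in each resample are i.i.d. from the
# empirical distribution, so the studentized resampled means are approximately
# N(0, 1) whatever the original dependence was.
theta_hat = x.mean()
idx = rng.integers(0, n, size=(B, m))
t_stats = np.sqrt(m) * (x[idx].mean(axis=1) - theta_hat) / x.std()
print(np.quantile(t_stats, [0.025, 0.975]))   # close to (-1.96, 1.96)
```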

paper research
Statistical Robustness in Chinese Remainder Theorem for Multiple Numbers

The generalized Chinese Remainder Theorem (CRT) is a well-known approach to solving ambiguity-resolution problems. In this paper, we study robust CRT reconstruction for multiple numbers from a statistical point of view. To the best of our knowledge, this is the first rigorous analysis of the underlying statistical model of CRT-based multiple-parameter estimation. To address the problem, two novel approaches are established. One is to directly calculate a conditional maximum a posteriori probability (MAP) estimate of the residue clustering, and the other is based on a generalized wrapped Gaussian mixture model to iteratively search for the MAP of both the estimands and the clustering. Residue error correcting codes are introduced to further improve robustness. Experimental results show that the statistical schemes achieve much stronger robustness than state-of-the-art deterministic schemes, especially in heavy-noise scenarios.
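
For reference, here is the classic CRT reconstruction that these robust schemes build on, for a single number with pairwise-coprime, noise-free moduli; the paper's setting of multiple numbers with noisy, unlabeled residues is what makes the statistical treatment necessary.

```python
from math import prod

def crt(residues, moduli):
    """Classic CRT reconstruction for pairwise-coprime, noise-free moduli."""
    M = prod(moduli)
    x = 0
    for r, m in zip(residues, moduli):
        Mi = M // m
        x += r * Mi * pow(Mi, -1, m)   # modular inverse (Python 3.8+)
    return x % M

moduli = [3, 5, 7]
n = 52
print(crt([n % m for m in moduli], moduli))   # recovers 52, since 52 < 3*5*7
```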

paper research
Fast and Flexible Bayesian Inference in Time-Varying Parameter Regression Models

In this paper, we write the time-varying parameter (TVP) regression model involving K explanatory variables and T observations as a constant-coefficient regression model with KT explanatory variables. In contrast with much of the existing literature, which assumes coefficients to evolve according to a random walk, a hierarchical mixture model on the TVPs is introduced. The resulting model closely mimics a random coefficients specification which groups the TVPs into several regimes. These flexible mixtures allow for TVPs that feature a small, moderate, or large number of structural breaks. We develop computationally efficient Bayesian econometric methods based on the singular value decomposition of the KT regressors. On artificial data, we find our methods to be accurate and much faster than standard approaches in terms of computation time. In an empirical exercise involving inflation forecasting using a large number of predictors, we find our models to forecast better than alternative approaches and document different patterns of parameter change than are found with approaches that assume random-walk evolution of the parameters.
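
The reparameterization in the first sentence is purely mechanical and easy to make concrete: stacking the TVPs into one KT-vector turns the model into a static regression with a block design. A minimal sketch follows; the hierarchical mixture prior and the SVD-based computation are the paper's actual contributions and are not shown.

```python
import numpy as np

def tvp_design(X):
    """Return the T x (K*T) block design Z with Z[t, t*K:(t+1)*K] = X[t],
    so that y = Z @ vec(beta_1, ..., beta_T) reproduces y_t = x_t' beta_t."""
    T, K = X.shape
    Z = np.zeros((T, K * T))
    for t in range(T):
        Z[t, t * K:(t + 1) * K] = X[t]
    return Z

X = np.arange(6, dtype=float).reshape(3, 2)   # T = 3 observations, K = 2 regressors
print(tvp_design(X))
```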

paper research
Automated Decision-Making for Electric Power Network Recovery

Critical infrastructure systems such as electric power networks, water networks, and transportation systems play a major role in the welfare of any community. In the aftermath of disasters, their recovery is of paramount importance; orderly and efficient recovery involves the assignment of limited resources (a combination of human repair workers and machines) to repair damaged infrastructure components. The decision maker must also deal with uncertainty in the outcome of the resource-allocation actions during recovery. The manual assignment of resources is seldom optimal despite the expertise of the decision maker, because of the large number of choices and the uncertainty in the consequences of sequential decisions. This combinatorial assignment problem under uncertainty is known to be NP-hard. We propose a novel decision technique that addresses the massive number of decision choices for large-scale real-world problems; in addition, our method features an experiential learning component that adaptively determines the utilization of the computational resources based on the performance of a small number of choices. Our framework is closed-loop, and naturally incorporates all the attractive features of such a decision-making system. In contrast to myopic approaches, which do not account for the future effects of the current choices, our methodology has an anticipatory learning component that effectively incorporates lookahead into the solutions. To this end, we leverage the theory of regression analysis, Markov decision processes (MDPs), multi-armed bandits, and stochastic models of community damage from natural disasters to develop a method for near-optimal recovery of communities. Our method contributes to the general problem of MDPs with massive action spaces, with application to the recovery of communities affected by hazards.

paper research
Saddlepoint Approximations for Spatial Panel Data Models with Fixed Effects and Time-Varying Covariates

We develop new higher-order asymptotic techniques for the Gaussian maximum likelihood estimator in a spatial panel data model with fixed effects, time-varying covariates, and spatially correlated errors. Our saddlepoint density and tail-area approximations feature a relative error of order $O(1/(n(T-1)))$, with $n$ being the cross-sectional dimension and $T$ the time-series dimension. The main theoretical tool is the tilted-Edgeworth technique in a non-identically distributed setting. The density approximation is always non-negative, does not need resampling, and is accurate in the tails. Monte Carlo experiments on density approximation and on testing in the presence of nuisance parameters illustrate the good performance of our approximation over first-order asymptotics and Edgeworth expansions. An empirical application to the investment-saving relationship in OECD (Organisation for Economic Co-operation and Development) countries shows disagreement between testing results based on first-order asymptotics and saddlepoint techniques.
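
For orientation, the classical saddlepoint (Daniels) approximation to the density of the mean of $n$ i.i.d. draws with cumulant generating function $K$ reads as follows; the paper develops the analogous approximation in a non-identically distributed spatial panel setting via the tilted-Edgeworth technique.

```latex
\hat{f}_{\bar{X}_n}(x)
  = \sqrt{\frac{n}{2\pi\, K''(\hat{s})}}
    \exp\!\bigl\{\, n \bigl[ K(\hat{s}) - \hat{s}\, x \bigr] \bigr\},
\qquad \text{where } K'(\hat{s}) = x,
```

with relative error of order $O(1/n)$; non-negativity and tail accuracy hold by construction, the same properties the panel version exploits.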

paper research
Beyond Demand Estimation: Consumer Surplus Evaluation via Cumulative Propensity Weights

This paper develops a practical framework for using observational data to audit the consumer surplus effects of AI-driven decisions, specifically in targeted pricing and algorithmic lending. Traditional approaches first estimate demand functions and then integrate to compute consumer surplus, but these methods can be challenging to implement in practice due to model misspecification in parametric demand forms and the large data requirements and slow convergence of flexible nonparametric or machine learning approaches. Instead, we exploit the randomness inherent in modern algorithmic pricing, arising from the need to balance exploration and exploitation, and introduce an estimator that avoids explicit estimation and numerical integration of the demand function. Each observed purchase outcome at a randomized price is an unbiased estimate of demand, and by carefully reweighting purchase outcomes using novel cumulative propensity weights (CPW), we are able to reconstruct the integral. Building on this idea, we introduce a doubly robust variant, named the augmented cumulative propensity weighting (ACPW) estimator, that only requires one of either the demand model or the historical pricing-policy distribution to be correctly specified. Furthermore, this approach facilitates the use of flexible machine learning methods for estimating consumer surplus, since it achieves fast convergence rates by incorporating an estimate of demand, even when the machine learning estimate has slower convergence rates. Neither of these estimators is a standard application of off-policy evaluation techniques, as the target estimand, consumer surplus, is unobserved. To address fairness, we extend this framework to an inequality-aware surplus measure, allowing regulators and firms to quantify the profit-equity trade-off. Finally, we validate our methods through comprehensive numerical studies.
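
The reweighting identity is easy to see in a stylized example: with randomized prices drawn from a known density, the consumer surplus at a posted price equals the integral of demand above that price, and each purchase outcome divided by the price density is an unbiased contribution to that integral. The plain inverse-propensity weights below are only a generic illustration; the paper's cumulative propensity weights (and the doubly robust ACPW variant) are different, more refined constructions.

```python
import numpy as np

rng = np.random.default_rng(6)

# CS(p0) = integral over [p0, P_MAX] of D(p) dp, with D(p) = P(V >= p), which
# randomized historical prices let us estimate without fitting a demand model.
P_MIN, P_MAX, P0, N = 0.0, 2.0, 0.8, 200_000
prices = rng.uniform(P_MIN, P_MAX, N)            # logged randomized prices
f_p = 1.0 / (P_MAX - P_MIN)                      # pricing-policy density
values = rng.exponential(1.0, N)                 # latent valuations (unobserved)
buys = (values >= prices).astype(float)          # observed purchase outcomes

cs_hat = np.mean(buys * (prices >= P0) / f_p)
cs_true = np.exp(-P0) - np.exp(-P_MAX)           # closed form for Exp(1) values
print(cs_hat, cs_true)
```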

paper research

