Probability-Aware Parking Choice

Current parking navigation systems often underestimate total travel time by failing to account for the time spent searching for a parking space, which significantly affects user experience, mode choice, congestion, and emissions. To address this issue, this paper introduces the probability-aware parking selection problem, which aims to direct drivers to the best parking location rather than straight to their destination. An adaptable dynamic programming framework is proposed for decision-making based on probabilistic information about parking availability at the parking lot level. Closed-form analysis determines when it is optimal to target a specific parking lot or explore alternatives, as well as the expected time cost. Sensitivity analysis and three illustrative cases are examined, demonstrating the model's ability to account for the dynamic nature of parking availability. Acknowledging the financial costs of permanent sensing infrastructure, the paper provides analytical and empirical assessments of errors incurred when leveraging stochastic observations to estimate parking availability. Experiments with real-world data from the US city of Seattle indicate this approach's viability, with mean absolute error decreasing from 7% to below 2% as observation frequency grows. In data-based simulations, probability-aware strategies demonstrate time savings up to 66% relative to probability-unaware baselines, yet still take up to 123% longer than direct-to-destination estimates.
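The backward recursion behind such a sequential parking search can be sketched in a few lines. The lot parameters, the fixed visiting order, and the flat overflow penalty below are illustrative assumptions, not the paper's actual model:

```python
from itertools import permutations

def expected_time(lots, overflow_cost=30.0):
    """Expected total time (minutes) of trying lots in a fixed order.

    lots: list of (drive_time, walk_time, p_available) per lot.
    """
    remaining = overflow_cost  # assumed cost when the final lot is also full
    for drive, walk, p in reversed(lots):
        # Drive there; with probability p park and walk, otherwise move on.
        remaining = drive + p * walk + (1 - p) * remaining
    return remaining

def best_order(lots, overflow_cost=30.0):
    """Brute-force the visiting order with the lowest expected time."""
    return min(permutations(lots),
               key=lambda order: expected_time(list(order), overflow_cost))
```

For two hypothetical lots, a near-certain lot close to the destination beats a cheaper but riskier one: `expected_time([(5, 3, 0.9), (2, 8, 0.5)])` evaluates to 9.8 minutes, versus 11.35 for the reverse order.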

CoCo-Fed: A Unified Framework for Memory- and Communication-Efficient Federated Learning at the Wireless Edge

The deployment of large-scale neural networks within the Open Radio Access Network (O-RAN) architecture is pivotal for enabling native edge intelligence. However, this paradigm faces two critical bottlenecks: the prohibitive memory footprint required for local training on resource-constrained gNBs, and the saturation of bandwidth-limited backhaul links during the global aggregation of high-dimensional model updates. To address these challenges, we propose CoCo-Fed, a novel Compression and Combination-based Federated learning framework that unifies local memory efficiency and global communication reduction. Locally, CoCo-Fed breaks the memory wall by performing a double-dimension down-projection of gradients, adapting the optimizer to operate on low-rank structures without introducing additional inference parameters/latency. Globally, we introduce a transmission protocol based on orthogonal subspace superposition, where layer-wise updates are projected and superimposed into a single consolidated matrix per gNB, drastically reducing the backhaul traffic. Beyond empirical designs, we establish a rigorous theoretical foundation, proving the convergence of CoCo-Fed even under unsupervised learning conditions suitable for wireless sensing tasks. Extensive simulations on an angle-of-arrival estimation task demonstrate that CoCo-Fed significantly outperforms state-of-the-art baselines in both memory and communication efficiency while maintaining robust convergence under non-IID settings.

HFedMoE: Resource-Aware Heterogeneous Federated Learning with Mixture-of-Experts

While federated learning (FL) enables fine-tuning of large language models (LLMs) without compromising data privacy, the substantial size of an LLM renders on-device training impractical for resource-constrained clients, such as mobile devices. Thus, Mixture-of-Experts (MoE) models have emerged as a computation-efficient solution, which activates only a sparse subset of experts during model training to reduce computing burden without sacrificing performance. Though integrating MoE into FL fine-tuning holds significant potential, it still encounters three key challenges: i) selecting appropriate experts for clients remains challenging due to the lack of a reliable metric to measure each expert's impact on local fine-tuning performance, ii) the heterogeneous computing resources across clients severely hinder MoE-based LLM fine-tuning, as dynamic expert activations across diverse input samples can overwhelm resource-constrained devices, and iii) client-specific expert subsets and routing preferences undermine global aggregation, where misaligned expert updates and inconsistent gating networks introduce destructive interference. To address these challenges, we propose HFedMoE, a heterogeneous MoE-based FL fine-tuning framework that customizes a subset of experts to each client for computation-efficient LLM fine-tuning. Specifically, HFedMoE identifies the expert importance based on its contributions to fine-tuning performance, and then adaptively selects a subset of experts from an information bottleneck perspective to align with each client's computing budget. A sparsity-aware model aggregation strategy is also designed to aggregate the actively fine-tuned experts and gating parameters with importance-weighted contributions. Extensive experiments demonstrate that HFedMoE outperforms state-of-the-art benchmarks in training accuracy and convergence speed.

Decoupling Amplitude and Phase Attention in the Frequency Domain for RGB-Event Based Visual Object Tracking

Existing RGB-Event visual object tracking approaches primarily rely on conventional feature-level fusion, failing to fully exploit the unique advantages of event cameras. In particular, the high dynamic range and motion-sensitive nature of event cameras are often overlooked, while low-information regions are processed uniformly, leading to unnecessary computational overhead for the backbone network. To address these issues, we propose a novel tracking framework that performs early fusion in the frequency domain, enabling effective aggregation of high-frequency information from the event modality. Specifically, RGB and event modalities are transformed from the spatial domain to the frequency domain via the Fast Fourier Transform, with their amplitude and phase components decoupled. High-frequency event information is selectively fused into the RGB modality through amplitude and phase attention, enhancing feature representation while substantially reducing backbone computation. In addition, a motion-guided spatial sparsification module leverages the motion-sensitive nature of event cameras to capture the relationship between target motion cues and spatial probability distribution, filtering out low-information regions and enhancing target-relevant features. Finally, a sparse set of target-relevant features is fed into the backbone network for learning, and the tracking head predicts the final target position. Extensive experiments on three widely used RGB-Event tracking benchmark datasets, including FE108, FELT, and COESOT, demonstrate the high performance and efficiency of our method. The source code of this paper will be released on https://github.com/Event-AHU/OpenEvTracking


Hierarchical Adaptive Evaluation of LLMs and SAST Tools for CWE Prediction in Python

Large Language Models have become integral to software development, yet they frequently generate vulnerable code. Existing code vulnerability detection benchmarks employ binary classification, lacking the CWE-level specificity required for actionable feedback in iterative correction systems. We present ALPHA (Adaptive Learning via Penalty in Hierarchical Assessment), the first function-level Python benchmark that evaluates both LLMs and SAST tools using hierarchically aware, CWE-specific penalties. ALPHA distinguishes between over-generalisation, over-specification, and lateral errors, reflecting practical differences in diagnostic utility. Evaluating seven LLMs and two SAST tools, we find LLMs substantially outperform SAST, though SAST demonstrates higher precision when detections occur. Critically, prediction consistency varies dramatically across models (8.26%-81.87% agreement), with significant implications for feedback-driven systems. We further outline a pathway for future work incorporating ALPHA penalties into supervised fine-tuning, which could provide principled hierarchy-aware vulnerability detection pending empirical validation.
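The distinction among over-generalisation, over-specification, and lateral errors can be illustrated with a toy hierarchy-aware scorer. The parent links below follow CWE's published tree, but the penalty weights are invented for illustration; ALPHA's actual penalty scheme may differ:

```python
# Minimal hierarchy-aware CWE penalty in the spirit of ALPHA.
# Penalty weights (0.3 / 0.5 / 1.0) are illustrative assumptions.
CWE_PARENT = {
    "CWE-89": "CWE-943",   # SQL injection -> query-logic neutralization
    "CWE-943": "CWE-74",   # -> injection
    "CWE-79": "CWE-74",    # XSS -> injection
}

def ancestors(cwe):
    """Walk parent links up to the root of the toy hierarchy."""
    chain = []
    while cwe in CWE_PARENT:
        cwe = CWE_PARENT[cwe]
        chain.append(cwe)
    return chain

def penalty(predicted, true):
    if predicted == true:
        return 0.0
    if predicted in ancestors(true):   # over-generalisation: mildest
        return 0.3
    if true in ancestors(predicted):   # over-specification: moderate
        return 0.5
    return 1.0                         # lateral error: least useful
```

Predicting the parent CWE-943 for a true CWE-89 would cost 0.3 under these assumed weights, while the sibling CWE-79 incurs the full 1.0.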

KGCE: Knowledge-Enhanced Dual-Graph Evaluator for Cross-Platform Assessment of Educational Agents Using Multimodal Language Models

With the rapid adoption of multimodal large language models (MLMs) in autonomous agents, cross-platform task execution capabilities in educational settings have garnered significant attention. However, existing benchmark frameworks still exhibit notable deficiencies in supporting cross-platform tasks in educational contexts, especially when dealing with school-specific software (such as XiaoYa Intelligent Assistant, HuaShi XiaZi, etc.), where the efficiency of agents often significantly decreases due to a lack of understanding of the structural specifics of these private-domain software. Additionally, current evaluation methods heavily rely on coarse-grained metrics like goal orientation or trajectory matching, making it challenging to capture the detailed execution and efficiency of agents in complex tasks. To address these issues, we propose KGCE (Knowledge-Augmented Dual-Graph Evaluator for Cross-Platform Educational Agent Benchmarking with Multimodal Language Models), a novel benchmarking platform that integrates knowledge base enhancement and a dual-graph evaluation framework. We first constructed a dataset comprising 104 education-related tasks, covering Windows, Android, and cross-platform collaborative tasks. KGCE introduces a dual-graph evaluation framework that decomposes tasks into multiple sub-goals and verifies their completion status, providing fine-grained evaluation metrics. To overcome the execution bottlenecks of existing agents in private-domain tasks, we developed an enhanced agent system incorporating a knowledge base specific to school-specific software. The code can be found at https://github.com/Kinginlife/KGCE.

Scale-Adaptive Power Flow Analysis Using Local Topology Slicing and Multi-Task Graph Learning

Developing deep learning models with strong adaptability to topological variations is of great practical significance for power flow analysis. To enhance model performance under variable system scales and improve robustness in branch power prediction, this paper proposes a Scale-adaptive Multi-task Power Flow Analysis (SaMPFA) framework. SaMPFA introduces a Local Topology Slicing (LTS) sampling technique that extracts subgraphs of different scales from the complete power network to strengthen the model's cross-scale learning capability. Furthermore, a Reference-free Multi-task Graph Learning (RMGL) model is designed for robust power flow prediction. Unlike existing approaches, RMGL predicts bus voltages and branch powers instead of phase angles. This design not only avoids the risk of error amplification in branch power calculation but also guides the model to learn the physical relationships of phase angle differences. In addition, the loss function incorporates extra terms that encourage the model to capture the physical patterns of angle differences and power transmission, further improving consistency between predictions and physical laws. Simulations on the IEEE 39-bus system and a real provincial grid in China demonstrate that the proposed model achieves superior adaptability and generalization under variable system scales, with accuracy improvements of 4.47% and 36.82%, respectively.

REE-TTT: Highly Adaptive Radar Echo Extrapolation Using Test-Time Training

Precipitation nowcasting is critically important for meteorological forecasting. Deep learning-based Radar Echo Extrapolation (REE) has become a predominant nowcasting approach, yet it suffers from poor generalization due to its reliance on high-quality local training data and static model parameters, limiting its applicability across diverse regions and extreme events. To overcome this, we propose REE-TTT, a novel model that incorporates an adaptive Test-Time Training (TTT) mechanism. The core of our model lies in the newly designed Spatio-temporal Test-Time Training (ST-TTT) block, which replaces the standard linear projections in TTT layers with task-specific attention mechanisms, enabling robust adaptation to non-stationary meteorological distributions and thereby significantly enhancing the feature representation of precipitation. Experiments under cross-regional extreme precipitation scenarios demonstrate that REE-TTT substantially outperforms state-of-the-art baseline models in prediction accuracy and generalization, exhibiting remarkable adaptability to data distribution shifts.

Digital Twin-Driven Communication-Efficient Federated Anomaly Detection for Industrial IoT

Anomaly detection is increasingly becoming crucial for maintaining the safety, reliability, and efficiency of industrial systems. Recently, with the advent of digital twins and data-driven decision-making, several statistical and machine-learning methods have been proposed. However, these methods face several challenges, such as dependence on only real sensor datasets, limited labeled data, high false alarm rates, and privacy concerns. To address these problems, we propose a suite of digital twin-integrated federated learning (DTFL) methods that enhance global model performance while preserving data privacy and communication efficiency. Specifically, we present five novel approaches: Digital Twin-Based Meta-Learning (DTML), Federated Parameter Fusion (FPF), Layer-wise Parameter Exchange (LPE), Cyclic Weight Adaptation (CWA), and Digital Twin Knowledge Distillation (DTKD). Each method introduces a unique mechanism to combine synthetic and real-world knowledge, balancing generalization with communication overhead. We conduct an extensive experiment using a publicly available cyber-physical anomaly detection dataset. For a target accuracy of 80%, CWA reaches the target in 33 rounds, FPF in 41 rounds, LPE in 48 rounds, and DTML in 87 rounds, whereas the standard FedAvg baseline and DTKD do not reach the target within 100 rounds. These results highlight substantial communication-efficiency gains (up to 62% fewer rounds than DTML and 31% fewer than LPE) and demonstrate that integrating DT knowledge into FL accelerates convergence to operationally meaningful accuracy thresholds for IIoT anomaly detection.

Technical Report on K-EXAONE

This technical report presents K-EXAONE, a large-scale multilingual language model developed by LG AI Research. K-EXAONE is built on a Mixture-of-Experts architecture with 236B total parameters, activating 23B parameters during inference. It supports a 256K-token context window and covers six languages: Korean, English, Spanish, German, Japanese, and Vietnamese. We evaluate K-EXAONE on a comprehensive benchmark suite spanning reasoning, agentic, general, Korean, and multilingual abilities. Across these evaluations, K-EXAONE demonstrates performance comparable to open-weight models of similar size. K-EXAONE, designed to advance AI for a better life, is positioned as a powerful proprietary AI foundation model for a wide range of industrial and research applications.

Sparse Threats, Focused Defense: Robust Reinforcement Learning Aware of Criticality for Safe Autonomous Driving

Reinforcement learning (RL) has shown considerable potential in autonomous driving (AD), yet its vulnerability to perturbations remains a critical barrier to real-world deployment. As a primary countermeasure, adversarial training improves policy robustness by training the AD agent in the presence of an adversary that deliberately introduces perturbations. Existing approaches typically model the interaction as a zero-sum game with continuous attacks. However, such designs overlook the inherent asymmetry between the agent and the adversary and thus fail to reflect the sparsity of safety-critical risks, rendering the achieved robustness inadequate for practical AD scenarios. To address these limitations, we introduce criticality-aware robust RL (CARRL), a novel adversarial training approach for handling sparse, safety-critical risks in autonomous driving. CARRL consists of two interacting components: a risk exposure adversary (REA) and a risk-targeted robust agent (RTRA). We model the interaction between the REA and RTRA as a general-sum game, allowing the REA to focus on exposing safety-critical failures (e.g., collisions) while the RTRA learns to balance safety with driving efficiency. The REA employs a decoupled optimization mechanism to better identify and exploit sparse safety-critical moments under a constrained budget. However, such focused attacks inevitably result in a scarcity of adversarial data. The RTRA copes with this scarcity by jointly leveraging benign and adversarial experiences via a dual replay buffer and enforces policy consistency under perturbations to stabilize behavior. Experimental results demonstrate that our approach reduces the collision rate by at least 22.66% across all cases compared to state-of-the-art baseline methods.

COMPASS: A Framework for Assessing Organization-Specific Policy Compliance in LLMs

As large language models are deployed in high-stakes enterprise applications, from healthcare to finance, ensuring adherence to organization-specific policies has become essential. Yet existing safety evaluations focus exclusively on universal harms. We present COMPASS (Company/Organization Policy Alignment Assessment), the first systematic framework for evaluating whether LLMs comply with organizational allowlist and denylist policies. We apply COMPASS to eight diverse industry scenarios, generating and validating 5,920 queries that test both routine compliance and adversarial robustness through strategically designed edge cases. Evaluating seven state-of-the-art models, we uncover a fundamental asymmetry: models reliably handle legitimate requests (>95% accuracy) but catastrophically fail at enforcing prohibitions, refusing only 13-40% of adversarial denylist violations. These results demonstrate that current LLMs lack the robustness required for policy-critical deployments, establishing COMPASS as an essential evaluation framework for organizational AI safety.

Enhancing Object Detection with Privileged Information: A General Teacher-Student Approach

This paper investigates the integration of the Learning Using Privileged Information (LUPI) paradigm in object detection to exploit fine-grained, descriptive information available during training but not at inference. We introduce a general, model-agnostic methodology for injecting privileged information, such as bounding box masks, saliency maps, and depth cues, into deep learning-based object detectors through a teacher-student architecture. Experiments are conducted across five state-of-the-art object detection models and multiple public benchmarks, including UAV-based litter detection datasets and Pascal VOC 2012, to assess the impact on accuracy, generalization, and computational efficiency. Our results demonstrate that LUPI-trained students consistently outperform their baseline counterparts, achieving significant boosts in detection accuracy with no increase in inference complexity or model size. Performance improvements are especially marked for medium and large objects, while ablation studies reveal that intermediate weighting of teacher guidance optimally balances learning from privileged and standard inputs. The findings affirm that the LUPI framework provides an effective and practical strategy for advancing object detection systems in both resource-constrained and real-world settings.

EverMemOS: A Self-Organizing Memory Operating System for Long-Term Reasoning Tasks

Large Language Models (LLMs) are increasingly deployed as long-term interactive agents, yet their limited context windows make it difficult to sustain coherent behavior over extended interactions. Existing memory systems often store isolated records and retrieve fragments, limiting their ability to consolidate evolving user states and resolve conflicts. We introduce EverMemOS, a self-organizing memory operating system that implements an engram-inspired lifecycle for computational memory. Episodic Trace Formation converts dialogue streams into MemCells that capture episodic traces, atomic facts, and time-bounded Foresight signals. Semantic Consolidation organizes MemCells into thematic MemScenes, distilling stable semantic structures and updating user profiles. Reconstructive Recollection performs MemScene-guided agentic retrieval to compose the necessary and sufficient context for downstream reasoning. Experiments on LoCoMo and LongMemEval show that EverMemOS achieves state-of-the-art performance on memory-augmented reasoning tasks. We further report a profile study on PersonaMem v2 and qualitative case studies illustrating chat-oriented capabilities such as user profiling and Foresight. Code is available at https://github.com/EverMind-AI/EverMemOS.

Saddlepoint Approximations for Spatial Panel Data Models with Fixed Effects and Time-Varying Covariates

We develop new higher-order asymptotic techniques for the Gaussian maximum likelihood estimator in a spatial panel data model, with fixed effects, time-varying covariates, and spatially correlated errors. Our saddlepoint density and tail area approximation feature relative error of order $O(1/(n(T-1)))$ with $n$ being the cross-sectional dimension and $T$ the time-series dimension. The main theoretical tool is the tilted-Edgeworth technique in a non-identically distributed setting. The density approximation is always non-negative, does not need resampling, and is accurate in the tails. Monte Carlo experiments on density approximation and testing in the presence of nuisance parameters illustrate the good performance of our approximation over first-order asymptotics and Edgeworth expansions. An empirical application to the investment-saving relationship in OECD (Organisation for Economic Co-operation and Development) countries shows disagreement between testing results based on first-order asymptotics and saddlepoint techniques.


Toward an Ontology for Defining Scenarios in the Evaluation of Automated Vehicles: An Object-Oriented Framework

The development of new assessment methods for the performance of automated vehicles is essential to enable the deployment of automated driving technologies, due to the complex operational domain of automated vehicles. One contributing method is scenario-based assessment, in which test cases are derived from real-world road traffic scenarios obtained from driving data. Given the complexity of the reality that is being modeled in these scenarios, it is a challenge to define a structure for capturing them. An intensional definition, providing a set of characteristics deemed both necessary and sufficient to qualify as a scenario, ensures that the scenarios constructed are both complete and intercomparable. In this article, we develop a comprehensive and operable definition of the notion of scenario while considering existing definitions in the literature. This is achieved by proposing an object-oriented framework in which scenarios and their building blocks are defined as classes of objects having attributes, methods, and relationships with other objects. The object-oriented approach promotes clarity, modularity, reusability, and encapsulation of the objects. We provide definitions and justifications of each of the terms. Furthermore, the framework is used to translate the terms into a coding language that is publicly available.

A Comprehensive Study on Temporal Modeling for Online Action Detection

Online action detection (OAD) is a practical yet challenging task, which has attracted increasing attention in recent years. A typical OAD system mainly consists of three modules: a frame-level feature extractor, usually based on pre-trained deep Convolutional Neural Networks (CNNs); a temporal modeling module; and an action classifier. Among them, the temporal modeling module is crucial, as it aggregates discriminative information from historical and current features. Though many temporal modeling methods have been developed for OAD and other topics, their effects on OAD have not been fairly investigated. This paper aims to provide a comprehensive study on temporal modeling for OAD, covering four meta types of temporal modeling methods, i.e., temporal pooling, temporal convolution, recurrent neural networks, and temporal attention, and to uncover good practices for producing a state-of-the-art OAD system. Many of these methods are explored in OAD for the first time and are extensively evaluated with various hyperparameters. Furthermore, based on our comprehensive study, we present several hybrid temporal modeling methods, which outperform the recent state-of-the-art methods with sizable margins on THUMOS-14 and TVSeries.
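Of the four meta types, temporal attention is the least obvious from the description alone. A dependency-free sketch (dot-product scores against a query vector with a softmax; feature dimensions and query choice are assumptions for illustration) looks like:

```python
import math

def attention_pool(features, query):
    """Aggregate per-frame feature vectors with temporal attention.

    Each frame is scored by dot-product similarity to a query vector
    (e.g., the current frame's feature), softmax-normalized, and the
    frames are then combined as a weighted average.
    """
    scores = [sum(f * q for f, q in zip(feat, query)) for feat in features]
    peak = max(scores)
    weights = [math.exp(s - peak) for s in scores]  # numerically stable softmax
    total = sum(weights)
    weights = [w / total for w in weights]
    dim = len(features[0])
    return [sum(w * feat[d] for w, feat in zip(weights, features))
            for d in range(dim)]
```

Temporal average pooling is the degenerate case in which all weights are uniform, which is what this sketch produces when the query is orthogonal to every frame.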

QoE-Driven Coupled Uplink and Downlink Rate Adaptation for 360-Degree Video Live Streaming

360-degree video provides an immersive 360-degree viewing experience and has been widely used in many areas. The 360-degree video live streaming systems involve capturing, compression, uplink (camera to video server) and downlink (video server to user) transmissions. However, few studies have jointly investigated such complex systems, especially the rate adaptation for the coupled uplink and downlink in the 360-degree video streaming under limited bandwidth constraints. In this letter, we propose a quality of experience (QoE)-driven 360-degree video live streaming system, in which a video server performs rate adaptation based on the uplink and downlink bandwidths and information concerning each user's real-time field-of-view (FOV). We formulate it as a nonlinear integer programming problem and propose an algorithm, which combines the Karush-Kuhn-Tucker (KKT) condition and branch and bound method, to solve it. The numerical results show that the proposed optimization model can improve users' QoE significantly in comparison with other baseline schemes.
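At toy scale, the discrete rate-selection problem can be illustrated by exhaustive search rather than the paper's KKT/branch-and-bound algorithm. The logarithmic QoE function and the bandwidth numbers below are assumptions for illustration:

```python
import itertools
import math

def best_rates(rates, n_users, downlink_budget):
    """Pick one bitrate per user maximizing total log-QoE
    subject to a summed downlink bandwidth budget (brute force)."""
    best, best_qoe = None, -math.inf
    for combo in itertools.product(rates, repeat=n_users):
        if sum(combo) <= downlink_budget:        # feasibility check
            qoe = sum(math.log(r) for r in combo)  # assumed QoE model
            if qoe > best_qoe:
                best, best_qoe = combo, qoe
    return best
```

A branch-and-bound solver reaches the same optimum while pruning most of this exponential search space, which is what makes the integer program tractable at realistic scale.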

I Feel You: A Theory of Mind Experiment in Gaming

In this study into the player's emotional theory of mind of gameplaying agents, we investigate how an agent's behaviour and the player's own performance and emotions shape the recognition of a frustrated behaviour. We focus on the perception of frustration as it is a prevalent affective experience in human-computer interaction. We present a testbed game tailored towards this end, in which a player competes against an agent with a frustration model based on theory. We collect gameplay data, an annotated ground truth about the player's appraisal of the agent's frustration, and apply face recognition to estimate the player's emotional state. We examine the collected data through correlation analysis and predictive machine learning models, and find that the player's observable emotions are not correlated highly with the perceived frustration of the agent. This suggests that our subjects' theory of mind is a cognitive process based on the gameplay context. Our predictive models, using ranking support vector machines, corroborate these results, yielding moderately accurate predictors of players' theory of mind.

Sharing Resources in Mobile Edge Clouds: A Game-Theoretic Approach

Mobile edge computing seeks to provide resources to different delay-sensitive applications. This is a challenging problem as an edge cloud-service provider may not have sufficient resources to satisfy all resource requests. Furthermore, allocating available resources optimally to different applications is also challenging. Resource sharing among different edge cloud-service providers can address the aforementioned limitation as certain service providers may have resources available that can be "rented" by other service providers. However, edge cloud service providers can have different objectives or utilities. Therefore, there is a need for an efficient and effective mechanism to share resources among service providers, while considering the different objectives of various providers. We model resource sharing as a multi-objective optimization problem and present a solution framework based on Cooperative Game Theory (CGT). We consider the strategy where each service provider allocates resources to its native applications first and shares the remaining resources with applications from other service providers. We prove that for a monotonic, non-decreasing utility function, the game is canonical and convex. Hence, the core is not empty and the grand coalition is stable. We propose two algorithms: Game-theoretic Pareto optimal allocation (GPOA) and Polyandrous-Polygamous Matching based Pareto Optimal Allocation (PPMPOA), both of which provide allocations from the core. Hence the obtained allocations are Pareto optimal and the grand coalition of all the service providers is stable. Experimental results confirm that our proposed resource sharing framework improves utilities of edge cloud-service providers and application request satisfaction.
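The core condition the stability claim rests on is easy to state concretely: an allocation is in the core when it distributes the grand coalition's value and no provider (or sub-coalition) could do better alone. A two-provider check, with a made-up characteristic function for illustration:

```python
def in_core(alloc, v):
    """Check core membership for a two-provider cooperative game.

    alloc: (x1, x2) utility allocation to providers 1 and 2.
    v: coalition values, keyed by frozenset of provider ids.
    """
    x1, x2 = alloc
    # Efficiency: the grand coalition's value is fully distributed.
    efficient = abs((x1 + x2) - v[frozenset({1, 2})]) < 1e-9
    # Individual rationality: no provider prefers to act alone.
    rational = x1 >= v[frozenset({1})] and x2 >= v[frozenset({2})]
    return efficient and rational
```

For convex games like the one proved in the paper, this set is guaranteed to be non-empty, which is why GPOA and PPMPOA can always return a stable allocation.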

GUIComp: A Real-Time, Multi-Faceted GUI Design Assistant

Users may face challenges while designing graphical user interfaces, due to a lack of relevant experience and guidance. This paper aims to investigate the issues that users with no experience face during the design process, and how to resolve them. To this end, we conducted semi-structured interviews, based on which we built a GUI prototyping assistance tool called GUIComp. This tool can be connected to GUI design software as an extension, and it provides real-time, multi-faceted feedback on a user's current design. Additionally, we conducted two user studies, in which we asked participants to create mobile GUIs with or without GUIComp, and requested online workers to assess the created GUIs. The experimental results show that GUIComp facilitated iterative design and that the participants with GUIComp had a better user experience and produced more acceptable designs than those who did not.

Enhancing Single-Port Memory Performance to Match Multi-Port Capabilities Using Coding Techniques

Many performance-critical systems today must rely on performance enhancements, such as multi-port memories, to keep up with the increasing demand for memory-access capacity. However, the large area footprints and complexity of existing multi-port memory designs limit their applicability. This paper explores a coding-theoretic framework to address this problem. In particular, this paper introduces a framework to encode data across multiple single-port memory banks in order to algorithmically realize the functionality of multi-port memory. This paper proposes three code designs with significantly less storage overhead compared to the existing replication-based emulations of multi-port memories. To further improve performance, we also demonstrate a memory controller design that utilizes redundancy across coded memory banks to more efficiently schedule read and write requests sent across multiple cores. Furthermore, guided by DRAM traces, the paper explores dynamic coding techniques to improve the efficiency of the coding-based memory design. We then show significant performance improvements in critical-word read and write latency in the proposed coded-memory design when compared to a traditional uncoded-memory design.
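The underlying idea of emulating multi-port behavior across single-port banks can be sketched with a single XOR parity bank (a toy scheme for intuition; the paper's three code designs are more involved):

```python
class CodedMemory:
    """Two single-port data banks plus an XOR parity bank.

    Each bank can serve one access per cycle. Maintaining p[i] = a[i] ^ b[i]
    lets us serve two same-cycle reads to bank A: one directly from A, and
    one reconstructed as b[j] ^ p[j], touching banks B and P instead.
    A toy single-parity sketch, not one of the paper's code designs.
    """

    def __init__(self, size):
        self.a = [0] * size
        self.b = [0] * size
        self.p = [0] * size  # parity bank: p[i] == a[i] ^ b[i]

    def write_a(self, i, value):
        self.a[i] = value
        self.p[i] = self.a[i] ^ self.b[i]  # a write costs an extra parity update

    def write_b(self, i, value):
        self.b[i] = value
        self.p[i] = self.a[i] ^ self.b[i]

    def read_a_pair(self, i, j):
        """Read a[i] and a[j] in one 'cycle': each bank is touched once."""
        first = self.a[i]               # bank A
        second = self.b[j] ^ self.p[j]  # banks B and P reconstruct a[j]
        return first, second
```

The replication-based alternative would store a full second copy of bank A; the parity bank achieves the same two-reads-per-cycle capability with half that storage overhead in this toy setting.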

paper research
Peer-to-Peer Energy Trading in Electricity Networks: An Overview

Peer-to-peer trading is a next-generation energy management technique that economically benefits proactive consumers (prosumers) transacting their energy as goods and services. At the same time, peer-to-peer energy trading is also expected to help the grid by reducing peak demand, lowering reserve requirements, and curtailing network loss. However, large-scale deployment of peer-to-peer trading in electricity networks poses a number of challenges in modeling transactions in both the virtual and physical layers of the network. As such, this article provides a comprehensive review of the state-of-the-art in research on peer-to-peer energy trading techniques. By doing so, we provide an overview of the key features of peer-to-peer trading and its benefits of relevance to the grid and prosumers. Then, we systematically classify the existing research in terms of the challenges that the studies address in the virtual and the physical layers. We then further identify and discuss those technical approaches that have been extensively used to address the challenges in peer-to-peer transactions. Finally, the paper is concluded with potential future research directions.

paper research
VC Dimensions of Nondeterministic Finite Automata for Words of Equal Length

VC Dimensions of Nondeterministic Finite Automata for Words of Equal Length

Let $NFA_b(q)$ denote the set of languages accepted by nondeterministic finite automata with $q$ states over an alphabet with $b$ letters. Let $B_n$ denote the set of words of length $n$. We give a quadratic lower bound on the VC dimension of \[ NFA_2(q) \cap B_n = \{ L \cap B_n \mid L \in NFA_2(q) \} \] as a function of $q$. Next, the work of Gruber and Holzer (2007) gives an upper bound for the nondeterministic state complexity of finite languages contained in $B_n$, which we strengthen using our methods. Finally, we give some theoretical and experimental results on the dependence on $n$ of the VC dimension and testing dimension of $NFA_2(q) \cap B_n$.

paper research
Button Simulation and Design Using FDVV Models

Button Simulation and Design Using FDVV Models

Designing a push-button with the desired sensation and performance is challenging because the mechanical construction must have the right response characteristics. Physical simulation of a button's force-displacement (FD) response has been studied to facilitate prototyping; however, the simulations' scope and realism have been limited. In this paper, we extend FD modeling to include vibration (V) and velocity-dependence (V) characteristics. The resulting FDVV models better capture the tactility characteristics of buttons, including snap. They increase the range of simulated buttons and the perceived realism relative to FD models. The paper also demonstrates methods for obtaining these models, editing them, and simulating accordingly. This end-to-end approach enables the analysis, prototyping, and optimization of buttons, and supports exploring designs that would be hard to implement mechanically.

paper research
Optimizing Gait Graphs: Generating Variable Gaits from a Base Gait for Lower-Limb Rehabilitation Exoskeleton Robots

Optimizing Gait Graphs: Generating Variable Gaits from a Base Gait for Lower-Limb Rehabilitation Exoskeleton Robots

The most prominent application of lower-limb rehabilitation exoskeleton (LLE) robots is helping paraplegics walk again. However, walking in daily life involves more than walking on flat ground with a fixed gait. This paper focuses on variable gait generation for LLE robots to adapt to complex walking environments. Unlike traditional gait generators for biped robots, the gaits generated for LLEs should be comfortable for patients. Inspired by the pose graph optimization algorithm in SLAM, we propose a graph-based gait generation algorithm called gait graph optimization (GGO) to generate variable, functional, and comfortable gaits, adapted to the walking environment, from one base gait collected from healthy individuals. Variants of the walking problem, e.g., stride adjustment, obstacle avoidance, and stair ascent and descent, help verify the proposed approach in simulation and experimentation. We open-source our implementation.

paper research
Dynamic Radar Network of UAVs: A Joint Navigation and Tracking Approach

Dynamic Radar Network of UAVs: A Joint Navigation and Tracking Approach

Nowadays there is growing research interest in enriching small flying robots with autonomous sensing and online navigation capabilities. This will enable a large number of applications, spanning from remote surveillance to logistics, smarter cities, and emergency aid in hazardous environments. In this context, an emerging problem is to track unauthorized small unmanned aerial vehicles (UAVs) hiding behind buildings or concealing themselves in large UAV networks. In contrast with current solutions, mainly based on static on-ground radars, this paper proposes the idea of a dynamic radar network of UAVs for real-time and high-accuracy tracking of malicious targets. To this end, we describe a solution for real-time navigation of UAVs to track a dynamic target using heterogeneously sensed information. Such information is shared by the UAVs with their neighbors via multi-hop communication, allowing the target to be tracked by a local Bayesian estimator running at each agent. Since not all paths are equal from an information-gathering point of view, the UAVs plan their own trajectories by minimizing the posterior covariance matrix of the target state under UAV kinematic and anti-collision constraints. Our results show how a dynamic network of radars attains better localization results compared to a fixed configuration, and how the on-board sensor technology impacts the accuracy in tracking a target with different radar cross sections, especially in non-line-of-sight (NLOS) situations.

paper research
Evolving an Interpretable Model Front for Data Visualization Using Genetic Programming

Evolving an Interpretable Model Front for Data Visualization Using Genetic Programming

Data visualisation is a key tool in data mining for understanding big datasets. Many visualisation methods have been proposed, including the well-regarded state-of-the-art method t-Distributed Stochastic Neighbour Embedding. However, the most powerful visualisation methods have a significant limitation: the manner in which they create their visualisation from the original features of the dataset is completely opaque. Many domains require an understanding of the data in terms of the original features; there is hence a need for powerful visualisation methods which use understandable models. In this work, we propose a genetic programming approach named GP-tSNE for evolving interpretable mappings from a dataset to high-quality visualisations. A multi-objective approach is designed that produces a variety of visualisations in a single run which give different trade-offs between visual quality and model complexity. Testing against baseline methods on a variety of datasets shows the clear potential of GP-tSNE to allow deeper insight into data than that provided by existing visualisation methods. We further highlight the benefits of a multi-objective approach through an in-depth analysis of a candidate front, which shows how multiple models can

paper research
Can I Trust You? A User Study on Robot-Mediated Support Groups

Can I Trust You? A User Study on Robot-Mediated Support Groups

Socially assistive robots have the potential to improve group dynamics when interacting with groups of people in social settings. This work contributes to the understanding of those dynamics through a user study of trust dynamics in the novel context of a robot-mediated support group. For this study, a novel framework for robot mediation of a support group was developed and validated. To evaluate interpersonal trust in the multi-party setting, a dyadic trust scale was implemented and found to be uni-factorial, validating it as an appropriate measure of general trust. The results of this study demonstrate a significant increase in average interpersonal trust after the group interaction session, and qualitative post-session interview data indicate that participants found the interaction helpful and successfully supported and learned from one another. The results of the study validate that a robot-mediated support group can improve trust among strangers and allow them to share and receive support for their academic stress.

paper research
Edge Matching with Inequalities, Triangular Pieces, Unknown Shape, and Two Players

Edge Matching with Inequalities, Triangular Pieces, Unknown Shape, and Two Players

We analyze the computational complexity of several new variants of edge-matching puzzles. First we analyze inequality (instead of equality) constraints between adjacent tiles, proving the problem NP-complete for strict inequalities but polynomial for nonstrict inequalities. Second we analyze three types of triangular edge matching, of which one is polynomial and the other two are NP-complete; all three are #P-complete. Third we analyze the case where no target shape is specified, and we merely want to place the (square) tiles so that edges match (exactly); this problem is NP-complete. Fourth we consider four 2-player games based on $1 \times n$ edge matching, all four of which are PSPACE-complete. Most of our NP-hardness reductions are parsimonious, newly proving #P- and ASP-completeness for, e.g., $1 \times n$ edge matching.

paper research
Resampled Statistics for Dependence-Robust Inference

We develop inference procedures robust to general forms of weak dependence. The procedures utilize test statistics constructed by resampling in a manner that does not depend on the unknown correlation structure of the data. We prove that the statistics are asymptotically normal under the weak requirement that the target parameter can be consistently estimated at the parametric rate. This holds for regular estimators under many well-known forms of weak dependence and justifies the claim of dependence-robustness. We consider applications to settings with unknown or complicated forms of dependence, with various forms of network dependence as leading examples. We develop tests for both moment equalities and inequalities.

paper research
When to Restrict Provider Entry in Mandated Purchase Markets

When to Restrict Provider Entry in Mandated Purchase Markets

We study a problem inspired by regulated health insurance markets, such as those created by the government in the Affordable Care Act Exchanges or by employers when they contract with private insurers to provide plans for their employees. The market regulator can choose to do nothing, running a Free Market, or can exercise her regulatory power by limiting the entry of providers (decreasing consumer welfare by limiting options, but also decreasing revenue via enhanced competition). We investigate whether limiting entry increases or decreases the utility (welfare minus revenue) of the consumers who purchase from the providers, specifically in settings where the outside option of purchasing nothing is prohibitively undesirable. We focus primarily on the case where providers are symmetric. We propose a sufficient condition on the distribution of consumer values for (a) a unique symmetric equilibrium to exist in both markets and (b) utility to be higher with limited entry. (We also establish that these conclusions do not necessarily hold for all distributions, and therefore some condition is necessary.) Our techniques are primarily based on tools from revenue maximization, and in particular Myerson's virtual value theory. We also consider extensions to settings where providers have identical costs for providing plans, and to two providers with an asymmetric distribution.

paper research
Recovering Latent Variables Through Matching

We propose an optimal-transport-based matching method to nonparametrically estimate linear models with independent latent variables. The method consists in generating pseudo-observations from the latent variables, so that the Euclidean distance between the model s predictions and their matched counterparts in the data is minimized. We show that our nonparametric estimator is consistent, and we document that it performs well in simulated data. We apply this method to study the cyclicality of permanent and transitory income shocks in the Panel Study of Income Dynamics. We find that the dispersion of income shocks is approximately acyclical, whereas the skewness of permanent shocks is procyclical. By comparison, we find that the dispersion and skewness of shocks to hourly wages vary little with the business cycle.

paper research
Stability of User Equilibria in Heterogeneous Routing Games

Stability of User Equilibria in Heterogeneous Routing Games

The asymptotic behaviour of deterministic logit dynamics in heterogeneous routing games is analyzed. It is proved that in directed multigraphs with parallel routes, and in series composition of such multigraphs, the dynamics admits a globally asymptotically stable fixed point. Moreover, the unique fixed point of the dynamics approaches the set of Wardrop equilibria, as the noise vanishes. The result relies on the fact that the dynamics of aggregate flows is monotone, and its Jacobian is strictly diagonally dominant by columns.
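A minimal sketch of deterministic logit dynamics on two parallel routes (the affine latencies are assumed for illustration, not taken from the paper) shows the unique fixed point approaching the Wardrop equilibrium as the noise vanishes:

```python
import math

def logit_fixed_point(eta, f0=0.5, dt=0.01, steps=50_000):
    """Euler integration of deterministic logit dynamics on two parallel
    routes with unit demand; f is the flow on route 1.

    Illustrative latencies (assumed, not from the paper): c1(f) = 2f and
    c2(f) = 1 - f, so the Wardrop equilibrium is f* = 1/3 (where c1 = c2).
    """
    f = f0
    for _ in range(steps):
        c1, c2 = 2.0 * f, 1.0 - f
        x = min((c1 - c2) / eta, 500.0)    # clamp to avoid math.exp overflow
        sigma = 1.0 / (1.0 + math.exp(x))  # logit probability of choosing route 1
        f += dt * (sigma - f)              # flow relaxes toward the logit choice
    return f
```

With the noise level eta = 0.5 the fixed point sits noticeably away from 1/3; with eta = 0.01 it is within about 0.01 of the Wardrop flow, matching the vanishing-noise behaviour described above.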

paper research
Near-Optimal Algorithms for Geometric Centers and Depth Problems

Near-Optimal Algorithms for Geometric Centers and Depth Problems

We develop a general randomized technique for solving implicit linear programming problems, where the collection of constraints is defined implicitly by an underlying ground set of elements. In many cases, the structure of the implicitly defined constraints can be exploited in order to obtain efficient linear program solvers. We apply this technique to obtain near-optimal algorithms for a variety of fundamental problems in geometry. For a given point set $P$ of size $n$ in $\mathbb{R}^d$, we develop algorithms for computing geometric centers of a point set, including the centerpoint and the Tukey median, and several other more involved measures of centrality. For $d=2$, the new algorithms run in $O(n \log n)$ expected time, which is optimal, and for higher constant $d>2$, the expected time bound is within one logarithmic factor of $O(n^{d-1})$, which is also likely near optimal for some of the problems.
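For intuition about the objects being computed (this is a brute-force sketch, not the paper's randomized LP technique), Tukey depth in the plane can be evaluated directly by scanning halfplane directions:

```python
import math

def tukey_depth(p, points, n_dirs=3600):
    """Tukey depth of p: the minimum, over closed halfplanes whose boundary
    passes through p, of the number of points of the set they contain.
    Brute force over sampled directions; exact for small generic inputs."""
    depth = len(points)
    for k in range(n_dirs):
        theta = 2.0 * math.pi * k / n_dirs
        ux, uy = math.cos(theta), math.sin(theta)
        count = sum(1 for (x, y) in points
                    if (x - p[0]) * ux + (y - p[1]) * uy >= 0.0)
        depth = min(depth, count)
    return depth

def tukey_median(points):
    """Data point of maximum Tukey depth (a discrete stand-in here: the
    true Tukey median need not be one of the data points)."""
    return max(points, key=lambda q: tukey_depth(q, points))
```

On the five points (0,0), (4,1), (3,4), (1,3), (2,2), the four hull vertices have depth 1 while the inner point (2,2) has depth 2, so it is the deepest point. The paper's contribution is doing this kind of computation in near-optimal time rather than by this quadratic-per-point scan.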

paper research
Prediction Intervals for Synthetic Control Methods

Prediction Intervals for Synthetic Control Methods

Uncertainty quantification is a fundamental problem in the analysis and interpretation of synthetic control (SC) methods. We develop conditional prediction intervals in the SC framework, and provide conditions under which these intervals offer finite-sample probability guarantees. Our method allows for covariate adjustment and non-stationary data. The construction begins by noting that the statistical uncertainty of the SC prediction is governed by two distinct sources of randomness: one coming from the construction of the (likely misspecified) SC weights in the pre-treatment period, and the other coming from the unobservable stochastic error in the post-treatment period when the treatment effect is analyzed. Accordingly, our proposed prediction intervals are constructed taking into account both sources of randomness. For implementation, we propose a simulation-based approach along with finite-sample-based probability bound arguments, naturally leading to principled sensitivity analysis methods. We illustrate the numerical performance of our methods using empirical applications and a small simulation study. Python, R, and Stata software packages implementing our methodology are available.

paper research
Algebraic k-Sets and Generally Neighborly Embeddings

Algebraic k-Sets and Generally Neighborly Embeddings

Given a set $S$ of $n$ points in $\mathbb{R}^d$, a $k$-set is a subset of $k$ points of $S$ that can be strictly separated by a hyperplane from the remaining $n-k$ points. Similarly, one may consider $k$-facets, which are hyperplanes that pass through $d$ points of $S$ and have $k$ points on one side. A notorious open problem is to determine the asymptotics of the maximum number of $k$-sets. In this paper we study a variation on the $k$-set/$k$-facet problem with hyperplanes replaced by algebraic surfaces. In stark contrast to the original $k$-set/$k$-facet problem, there are some natural families of algebraic curves for which the number of $k$-facets can be counted exactly. For example, we show that the number of halving conic sections for any set of $2n+5$ points in general position in the plane is $2\binom{n+2}{2}^2$. To understand the limits of our argument we study a class of maps we call generally neighborly embeddings, which map generic point sets into neighborly position. Additionally, we give a simple argument which improves the best known bound on the number of $k$-sets/$k$-facets for point sets in convex position.
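The exact-count claim can be probed by brute force for $n=1$: fit the conic through each 5-subset and check the signs of the quadratic at the remaining two points. One caveat on our reading of the formula: for $n=0$ it gives 2 even though exactly one conic passes through five generic points, so the formula appears to count each curve once per side-ordering; the unordered count below should therefore be configuration-independent and equal to half the formula. This interpretation is our assumption, not a statement from the abstract.

```python
import itertools
import random

def solve_linear(A, b):
    """Gaussian elimination with partial pivoting for a small square system."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def halving_conics(points, n):
    """Count 5-subsets of `points` (|points| = 2n+5) whose unique conic
    leaves n of the remaining 2n points on each side (side = sign of the
    quadratic). The constant coefficient is fixed to 1, which is safe for
    generic points (no fitted conic passes through the origin)."""
    count = 0
    for five in itertools.combinations(points, 5):
        A = [[x * x, x * y, y * y, x, y] for (x, y) in five]
        a, b, c, d, e = solve_linear(A, [-1.0] * 5)
        rest = [p for p in points if p not in five]
        pos = sum(1 for (x, y) in rest
                  if a * x * x + b * x * y + c * y * y + d * x + e * y + 1.0 > 0)
        if pos == n:
            count += 1
    return count

def random_points(seed, m):
    rng = random.Random(seed)
    return [(rng.uniform(-1, 1), rng.uniform(-1, 1)) for _ in range(m)]

# n = 1: seven random points. The count should not depend on the (generic)
# configuration, and under the side-ordering reading it equals 18 / 2 = 9.
counts = [halving_conics(random_points(s, 7), 1) for s in (1, 2, 3)]
```

The configuration-independence of the count is exactly the "can be counted exactly" phenomenon the abstract highlights.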

paper research
Statistical Robustness in the Chinese Remainder Theorem for Multiple Numbers

Statistical Robustness in the Chinese Remainder Theorem for Multiple Numbers

The generalized Chinese Remainder Theorem (CRT) is a well-known approach to solving ambiguity-resolution problems. In this paper, we study robust CRT reconstruction for multiple numbers from a statistical viewpoint. To the best of our knowledge, this is the first rigorous analysis of the underlying statistical model of CRT-based multiple-parameter estimation. To address the problem, two novel approaches are established. One is to directly calculate a conditional maximum a posteriori probability (MAP) estimate of the residue clustering, and the other is based on a generalized wrapped Gaussian mixture model to iteratively search for the MAP of both the estimands and the clustering. Residue error correcting codes are introduced to further improve robustness. Experimental results show that the statistical schemes achieve much stronger robustness compared to state-of-the-art deterministic schemes, especially in heavy-noise scenarios.
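The noise-free, single-number CRT reconstruction underlying these schemes (the robust, statistical multi-number machinery is not shown here) is the standard construction:

```python
from math import prod

def crt(residues, moduli):
    """Reconstruct the unique x modulo prod(moduli) with x % m_i == r_i,
    assuming the moduli are pairwise coprime. Standard construction:
    x = sum_i r_i * M_i * (M_i^{-1} mod m_i), where M_i = M / m_i."""
    M = prod(moduli)
    x = 0
    for r, m in zip(residues, moduli):
        Mi = M // m
        x += r * Mi * pow(Mi, -1, m)  # modular inverse exists by coprimality
    return x % M
```

For example, `crt([x % m for m in (3, 5, 7)], (3, 5, 7))` recovers any 0 <= x < 105. The robust variants studied in the paper must additionally cope with erroneous residues and with assigning residues to the correct one of several unknown numbers.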

paper research
Slither: A Static Analysis Framework for Smart Contracts

Slither: A Static Analysis Framework for Smart Contracts

This paper describes Slither, a static analysis framework designed to provide rich information about Ethereum smart contracts. It works by converting Solidity smart contracts into an intermediate representation called SlithIR. SlithIR uses Static Single Assignment (SSA) form and a reduced instruction set to ease implementation of analyses while preserving semantic information that would be lost in transforming Solidity to bytecode. Slither allows for the application of commonly used program analysis techniques like dataflow and taint tracking. Our framework has four main use cases: (1) automated detection of vulnerabilities, (2) automated detection of code optimization opportunities, (3) improvement of the user's understanding of the contracts, and (4) assistance with code review. In this paper, we present an overview of Slither, detail the design of its intermediate representation, and evaluate its capabilities on real-world contracts. We show that Slither's bug detection is fast and accurate, outperforming other static analysis tools at finding issues in Ethereum smart contracts in terms of speed, robustness, and the balance of detections and false positives. We compared the tools using a large dataset of smart contracts and manually reviewed the results for 1000 of the most used contracts.

paper research
BRISC-V: An Open-Source Toolbox for Exploring Architecture Design Spaces

BRISC-V: An Open-Source Toolbox for Exploring Architecture Design Spaces

In this work, we introduce a platform for register-transfer level (RTL) architecture design space exploration. The platform is an open-source, parameterized, synthesizable set of RTL modules for designing RISC-V based single and multi-core architecture systems. The platform is designed with a high degree of modularity. It provides highly-parameterized, composable RTL modules for fast and accurate exploration of different RISC-V based core complexities, multi-level caching and memory organizations, system topologies, router architectures, and routing schemes. The platform can be used for both RTL simulation and FPGA based emulation. The hardware modules are implemented in synthesizable Verilog using no vendor-specific blocks. The platform includes a RISC-V compiler toolchain to assist in developing software for the cores, a web-based system configuration graphical user interface (GUI) and a web-based RISC-V assembly simulator. The platform supports a myriad of RISC-V architectures, ranging from a simple single cycle processor to a multi-core SoC with a complex memory hierarchy and a network-on-chip. The modules are designed to support incremental additions and modifications. The interfaces between components are particularly designed to allow parts of the processor such as whole cache modules, cores or individual pipeline stages, to be modified or replaced without impacting the rest of the system. The platform allows researchers to quickly instantiate complete working RISC-V multi-core systems with synthesizable RTL and make targeted modifications to fit their needs. The complete platform (including Verilog source code) can be downloaded at https://ascslab.org/research/briscv/explorer/explorer.html.

paper research
Average-Based Robustness for Continuous-Time Signal Temporal Logic

Average-Based Robustness for Continuous-Time Signal Temporal Logic

We propose a new robustness score for continuous-time Signal Temporal Logic (STL) specifications. Instead of considering only the most severe point along the evolution of the signal, we use average scores to extract more information from the signal, emphasizing robust satisfaction of all of a specification's subformulae over their entire time-interval domains. We demonstrate the advantages of this new score in falsification and control synthesis problems in systems with complex dynamics and in multi-agent systems.
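The difference between the classical min-based score and an averaged score can be sketched on a sampled signal for a formula of the shape "always (s > c)" (a simplified discretization of the continuous-time notion, for illustration only):

```python
def traditional_robustness(samples, threshold):
    """Min-based robustness of 'always (s > threshold)': only the most
    severe sample along the trace matters."""
    return min(s - threshold for s in samples)

def average_robustness(samples, threshold):
    """Averaged robustness of the same formula: every sample in the
    interval contributes, rewarding robust satisfaction throughout."""
    margins = [s - threshold for s in samples]
    return sum(margins) / len(margins)

# Two signals with the same worst-case margin but different behavior:
sig_a = [0.1, 0.9, 0.9, 0.9, 0.9]  # dips once, otherwise very robust
sig_b = [0.1, 0.2, 0.1, 0.2, 0.1]  # barely satisfies everywhere
```

Both signals get the same min-based score (0.1), while the averaged score ranks `sig_a` as much more robust, which is the extra information the new score is designed to extract.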

paper research
Resilience of Dynamic Routing Against Recurrent and Random Sensing Faults

Resilience of Dynamic Routing Against Recurrent and Random Sensing Faults

Feedback dynamic routing is a commonly used control strategy in transportation systems. This class of control strategies relies on real-time information about the traffic state in each link. However, such information may not always be observable due to temporary sensing faults. In this article, we consider dynamic routing over two parallel routes, where the sensing on each link is subject to recurrent and random faults. The faults occur and clear according to a finite-state Markov chain. When the sensing is faulty on a link, the traffic state on that link appears to be zero to the controller. Building on the theories of Markov processes and monotone dynamical systems, we derive lower and upper bounds for the resilience score, i.e. the guaranteed throughput of the network, in the face of sensing faults by establishing stability conditions for the network. We use these results to study how a variety of key parameters affect the resilience score of the network. The main conclusions are (i) Sensing faults can reduce throughput and destabilize a nominally stable network; (ii) A higher failure rate does not necessarily reduce throughput, and there may exist a worst rate that minimizes throughput; (iii) Higher correlation between the failure probabilities of two links leads to greater throughput; (iv) A large difference in capacity between two links can result in a drop in throughput.

paper research
Obsidian: Typestate and Assets for Safer Blockchain Programming

Obsidian: Typestate and Assets for Safer Blockchain Programming

Blockchain platforms are coming into broad use for processing critical transactions among participants who have not established mutual trust. Many blockchains are programmable, supporting smart contracts, which maintain persistent state and support transactions that transform the state. Unfortunately, bugs in many smart contracts have been exploited by hackers. Obsidian is a novel programming language with a type system that enables static detection of bugs that are common in smart contracts today. Obsidian is based on a core calculus, Silica, for which we proved type soundness. Obsidian uses typestate to detect improper state manipulation and uses linear types to detect abuse of assets. We describe two case studies that evaluate Obsidian's applicability to the domains of parametric insurance and supply chain management, finding that Obsidian's type system facilitates reasoning about high-level states and ownership of resources. We compared our Obsidian implementation to a Solidity implementation, observing that the Solidity implementation requires substantial boilerplate for checking and tracking state, whereas Obsidian does this work statically.

paper research
Frequency Stability with MPC-based Inverter Control in Low-Inertia Power Systems

Frequency Stability with MPC-based Inverter Control in Low-Inertia Power Systems

The electrical grid is evolving from a network consisting mostly of synchronous machines to a mixture of synchronous machines and inverter-based resources such as wind, solar, and energy storage. This transformation has led to a decrease in mechanical inertia, which necessitates that these new resources provide frequency response through their inverter interfaces. In this paper we propose a new strategy based on model predictive control to determine the optimal active-power set-point for inverters in the event of a disturbance in the system. Our framework explicitly takes the hard constraints on power and energy into account, and we show that it is robust to measurement noise, limited communication, and delay by using an observer to estimate the model mismatches in real time. We demonstrate that the proposed controller significantly outperforms an optimally tuned virtual synchronous machine on a standard 39-bus system under a number of scenarios. In turn, this implies that optimized inverter-based resources can provide better frequency response than conventional synchronous machines.

paper research
Using Engineered Neurons in Digital Logic Circuits: A Study in Molecular Communications

Using Engineered Neurons in Digital Logic Circuits: A Study in Molecular Communications

With the advancement of synthetic biology, several new tools have been conceptualized over the years as alternative treatments for current medical procedures, most of them targeting various chronic diseases. This work investigates how synthetically engineered neurons can operate as digital logic gates that can be used for bio-computing in the brain. We quantify the accuracy of logic gates under high firing rates amid a network of neurons, and the extent to which they can smooth out uncontrolled neuronal firing. To test the efficacy of our method, we perform simulations of computational models of neurons connected in structures that represent logic gates. The simulations demonstrate the accuracy of performing the correct logic operation, and how specific properties such as the firing rate play an important role in this accuracy. As part of the analysis, the mean squared error is used to quantify the quality of our proposed model and to predict the accurate operation of a gate under different sampling frequencies. As an application, the logic gates were used to trap epileptic seizures in a neuronal network, where the results demonstrated the effectiveness of reducing the firing rate. Our proposed system has the potential for computing numerous neurological conditions of the brain.

paper research
Unlimited Budget Analysis of Randomized Search Heuristics

Unlimited Budget Analysis of Randomized Search Heuristics

Performance analysis of all kinds of randomised search heuristics is a rapidly growing and developing field. Run time and solution quality are two popular measures of the performance of these algorithms. The focus of this paper is on the solution quality an optimisation heuristic achieves, not on the time it takes to reach this goal, setting it far apart from runtime analysis. We contribute to its further development by introducing a novel analytical framework, called unlimited budget analysis, to derive the expected fitness value after arbitrarily many computational steps. It has its roots in the very recently introduced approximation error analysis and bears some similarity to fixed budget analysis. We present the framework and apply it to simple mutation-based algorithms, covering both local and global search. We provide analytical results for a number of pseudo-Boolean functions under unlimited budget analysis and compare them to results derived within the fixed budget framework for the same algorithms and functions. We also report experiments comparing the bounds obtained in the two frameworks with the actual observed performance. The study shows that unlimited budget analysis may lead to the same or more general estimates than fixed budget analysis.
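The flavour of unlimited budget analysis can be illustrated on a textbook case (chosen for the sketch; not necessarily one of the paper's results): for Random Local Search on OneMax, the drift E[f_{t+1} | f_t] = f_t + (n - f_t)/n yields the closed form E[f_t] = n - (n - E[f_0])(1 - 1/n)^t for any budget t, which simulation reproduces.

```python
import random

def rls_onemax(n, steps, rng):
    """Random Local Search on OneMax: flip one uniformly random bit and
    keep the flip iff fitness does not decrease; return the final fitness."""
    bits = [rng.randint(0, 1) for _ in range(n)]
    fitness = sum(bits)
    for _ in range(steps):
        i = rng.randrange(n)
        if bits[i] == 0:     # flipping a 0-bit gains 1 and is accepted
            bits[i] = 1
            fitness += 1
        # flipping a 1-bit would lose fitness and is rejected
    return fitness

def expected_fitness(n, steps, f0):
    """Unlimited-budget closed form from E[f_{t+1} | f_t] = f_t + (n - f_t)/n."""
    return n - (n - f0) * (1.0 - 1.0 / n) ** steps

rng = random.Random(42)
n, t, trials = 20, 30, 4000
avg = sum(rls_onemax(n, t, rng) for _ in range(trials)) / trials
# avg should track expected_fitness(n, t, f0=n/2) closely
```

Unlike a fixed budget result, the closed form holds for every t simultaneously, which is the point of the unlimited budget view.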

paper research
The DUNE Framework: Core Principles and Recent Advances

The DUNE Framework: Core Principles and Recent Advances

This paper presents the basic concepts and the module structure of the Distributed and Unified Numerics Environment and reflects on recent developments and general changes that have happened since the release of the first Dune version in 2007 and the main papers describing that state [1, 2]. This discussion is accompanied by a description of various advanced features, such as coupling of domains and cut cells, grid modifications such as adaptation and moving domains, high-order discretizations and node-level performance, non-smooth multigrid methods, and multiscale methods. A brief discussion of current and future development directions of the framework concludes the paper.

paper research
Multi-Objective Evolutionary Approach for Grey-Box Identification of a Buck Converter

Multi-Objective Evolutionary Approach for Grey-Box Identification of a Buck Converter

The present study proposes a simple grey-box identification approach to model a real DC-DC buck converter operating in continuous conduction mode. The problem associated with the information void in the observed dynamical data, which is often obtained over a relatively narrow input range, is alleviated by exploiting the known static behavior of the buck converter as a priori knowledge. A simple method is developed, based on the concept of term clusters, to determine the static response of the candidate models. The error in the static behavior is then directly embedded into the multi-objective framework for structure selection. In essence, the proposed approach casts the grey-box identification problem into a multi-objective framework to balance the bias-variance dilemma of model building while explicitly integrating a priori knowledge into the structure selection process. The results of the investigation, considering the case of a practical buck converter, demonstrate that it is possible to identify parsimonious models which capture both the dynamic and static behavior of the system over a wide input range.

paper research
Few-Shot Learning with Surrogate Gradient Descent on a Neuromorphic Processor

Recent work suggests that synaptic plasticity dynamics in biological models of neurons and neuromorphic hardware are compatible with gradient-based learning (Neftci et al., 2019). Gradient-based learning requires iterating several times over a dataset, which is both time-consuming and constrains the training samples to be independent and identically distributed. This is incompatible with learning systems that do not have boundaries between training and inference, such as neuromorphic hardware. One approach to overcoming these constraints is transfer learning, where a portion of the network is pre-trained and mapped into hardware and the remaining portion is trained online. Transfer learning has the advantage that pre-training can be accelerated offline if the task domain is known, and few samples of each class are sufficient for learning the target task at reasonable accuracy. Here, we demonstrate online surrogate-gradient few-shot learning on Intel's Loihi neuromorphic research processor using features pre-trained with spike-based gradient backpropagation-through-time. Our experimental results show that the Loihi chip can learn gestures online using a small number of shots and achieve results that are comparable to models simulated on a conventional processor.
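A minimal numerical sketch of the surrogate-gradient idea underlying this line of work: the forward pass uses a hard spike threshold, while the backward pass substitutes a smooth surrogate derivative (here a fast-sigmoid form). The readout update rule and all parameter values are illustrative assumptions of this sketch, not the paper's on-chip implementation:

```python
import numpy as np

def spike(v, threshold=1.0):
    """Forward pass: non-differentiable Heaviside spike function."""
    return (v >= threshold).astype(float)

def surrogate_grad(v, threshold=1.0, beta=10.0):
    """Backward-pass stand-in for d(spike)/dv: derivative of a fast
    sigmoid, peaked at the threshold instead of zero almost everywhere."""
    return beta / (2.0 * (1.0 + beta * np.abs(v - threshold)) ** 2)

def readout_update(w, features, target, lr=0.1):
    """Delta-rule update of a readout layer on top of frozen,
    pre-trained features (the transfer-learning setting above)."""
    v = w @ features                      # membrane potentials of outputs
    out = spike(v)
    err = target - out
    # chain rule with the surrogate in place of the spike derivative
    w = w + lr * np.outer(err * surrogate_grad(v), features)
    return w, out
```

Because only the readout is updated online, a handful of labeled samples per class can suffice, which is what makes the few-shot setting tractable on hardware.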

paper research
Hash-Based Ray Path Prediction: Avoiding BVH Traversal by Utilizing Ray Locality

State-of-the-art ray tracing techniques operate on hierarchical acceleration structures such as BVH trees, which wrap objects in a scene into bounding volumes of decreasing size. Acceleration structures reduce the number of ray-scene intersection tests a ray has to perform to find the intersecting object. However, we observe a large amount of redundancy when rays traverse these acceleration structures. While modern acceleration structures exploit the spatial organization of the scene, they neglect similarities between the rays that traverse them and thereby cause redundant traversals. This paper provides a limit study of a new, promising technique, Hash-Based Ray Path Prediction (HRPP), which exploits the similarity between rays to predict leaf nodes and avoid redundant acceleration-structure traversals. Our data shows that acceleration-structure traversal consumes a significant proportion of ray tracing rendering time regardless of the platform or the target image quality. Our study quantifies unused ray locality and evaluates the theoretical potential for improved ray-traversal performance for both coherent and seemingly incoherent rays. We show that HRPP is able to skip, on average, 40% of all hit-all traversal computations.
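To make the prediction mechanism concrete, here is a toy Python sketch of the hash-and-predict idea: rays are quantized by origin cell and coarse direction bin, and the leaf a previous similar ray reached is tried before falling back to a full BVH traversal. The quantization granularity, table layout, and omission of a verification intersection test are simplifying assumptions of this sketch, not HRPP's exact design:

```python
import math

def ray_hash(origin, direction, cell=1.0, ang=8):
    """Quantize ray origin (grid cells) and direction (coarse angular
    bins) so that nearby, similar rays collide in the same bucket."""
    o = tuple(int(math.floor(c / cell)) for c in origin)
    d = tuple(int(c * ang) for c in direction)
    return hash((o, d))

class HRPP:
    """Toy predictor: maps a ray's hash to the leaf it last reached."""
    def __init__(self):
        self.table = {}
        self.hits = 0
        self.misses = 0

    def predict(self, origin, direction):
        return self.table.get(ray_hash(origin, direction))

    def record(self, origin, direction, leaf):
        self.table[ray_hash(origin, direction)] = leaf

    def trace(self, origin, direction, full_traversal):
        """Try the predicted leaf first; fall back to full traversal."""
        leaf = self.predict(origin, direction)
        if leaf is not None:
            self.hits += 1           # predicted: traversal skipped
            return leaf
        self.misses += 1             # unseen ray: traverse and learn
        leaf = full_traversal(origin, direction)
        self.record(origin, direction, leaf)
        return leaf
```

In a real renderer the predicted leaf would still be intersection-tested, with a full traversal on failure, so a prediction only saves the interior-node descent rather than correctness.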

paper research
