Title: Schrödinger’s Navigator: Imagining an Ensemble of Futures for Zero-Shot Object Navigation
ArXiv ID: 2512.21201

Abstract
Zero-shot object navigation (ZSON) requires a robot to locate a target object in a previously unseen environment without relying on pre-built maps or task-specific training. However, existing ZSON methods often struggle in realistic and cluttered environments, particularly when the scene contains heavy occlusions, unknown risks, or dynamically moving target objects. To address these challenges, we propose Schrödinger's Navigator, a navigation framework inspired by Schrödinger's thought experiment on uncertainty.

The framework treats unobserved space as a set of plausible future worlds and reasons over them before acting. Conditioned on egocentric visual inputs and three candidate trajectories, a trajectory-conditioned 3D world model imagines future observations along each path. This enables the agent to see beyond occlusions and anticipate risks in unseen regions without requiring extra detours or dense global mapping. The imagined 3D observations are fused into the navigation map and used to update a value map. These updates guide the policy toward trajectories that avoid occlusions, reduce exposure to uncertain space, and better track moving targets. Experiments on a Go2 quadruped robot across three challenging scenarios, including severe static occlusions, unknown risks, and dynamically moving targets, show that Schrödinger’s Navigator consistently outperforms strong ZSON baselines in self-localization, object localization, and overall Success Rate in occlusion-heavy environments. These results demonstrate the effectiveness of reasoning over imagined 3D futures for zero-shot object navigation under real-world uncertainty.

Object navigation is a fundamental capability for mobile robots operating in real-world environments [1,4,6]. To be effective in practical applications such as service robotics or household assistance, agents must be capable of searching for target objects in previously unseen environments without relying on pre-built maps [5,8,18] or extensive task-specific retraining for each new setting [32,49]. Zero-shot object navigation (ZSON) formalizes this requirement by tasking a robot with finding a specified object in a novel environment without any task-specific finetuning [10,24]. Although recent ZSON methods have shown promising results in simulation and simplified settings, their performance often degrades in realistic and cluttered environments, where the robot must handle heavy occlusions, unknown risks, and dynamically moving target objects [3,10,36,41,46]. In such environments, the robot’s perception of the world is inherently partial and uncertain. Substantial portions of the scene remain unobserved behind obstacles, as shown in Figure 1, where the cat is occluded by the table. In addition, potential hazards or targets may appear or disappear as the robot moves. Existing ZSON methods typically struggle under these conditions. They often fail when the target object is hidden behind severe static occlusions, when the environment contains unknown risks, or when the object moves during the navigation episode. These failures highlight a fundamental limitation: current approaches do not explicitly reason about multiple plausible configurations of unobserved space before acting. Consequently, they are easily misled by local observations in cluttered, occlusion-heavy environments [9,28,29].

To address these challenges, we draw inspiration from Schrödinger’s thought experiment on uncertainty and propose Schrödinger’s Navigator, a principled navigation framework that treats unobserved space as a set of plausible future worlds and reasons over them before committing to an action, as illustrated in Figure 1. Unlike prior approaches [6,9,32] that assume a single fixed completion of the partially observed environment, Schrödinger’s Navigator explicitly imagines how the world could appear along multiple candidate trajectories and uses these imagined futures to inform decision-making. This enables the agent to plan as if it were “seeing beyond” current occlusions, anticipating risks and target motions in regions that have not yet been directly observed.

At its core, Schrödinger’s Navigator utilizes a trajectory-conditioned 3D world model [2]. The model receives egocentric visual observations and candidate trajectories as input. It generates predicted future observations along each trajectory and produces hypothetical 3D views representing what the agent would perceive if it followed that path. To balance the coverage of a representative action space with computational efficiency, we sample three candidate trajectories at each planning step. The resulting 3D future observations are aligned, fused, and integrated into an augmented navigation map. This map extends the robot’s representation beyond the directly visible environment. The augmented map is subsequently used to update a value map, which guides the navigation policy toward trajectories that mitigate occlusions, reduce exposure to uncertain regions, and improve tracking of moving targets. In this way, Schrödinger’s Navigator exploits trajectory-conditioned 3D imagination to reason about occluded and risky spaces without requiring dense global mapping or additional detours.

We evaluate our Schrödinger’s Navigator on a Go2 quadruped robot across three challenging real-world scenarios that involve severe static occlusions, latent hazards, and dynamically moving targets. In all settings, our system demonstrates stable and reliable performance, consistently outperforming strong ZSON baselines in self-localization, object localization, and overall task success in cluttered, occlusion-heavy environments. These results indicate that a mature, inference-time-only pipeline that explicitly reasons over imagined 3D futures along candidate trajectories provides a robust and generalizable foundation for zero-shot object navigation under uncertain real-world conditions.

Our contributions are summarized as follows:

• We propose Schrödinger’s Navigator, a zero-shot object navigation framework that treats unobserved space as a set of plausible future worlds and reasons over them before acting. This approach enables the agent to see beyond occlusions, anticipate risks, and better handle dynamically moving targets in cluttered environments.

• We utilize a trajectory-conditioned 3D world model that, given egocentric visual inputs and three candidate trajectories, imagines 3D future observations along each path. These observations are then aligned and fused into the navigation map, which is used to update a value map that guides the policy toward safer, less uncertain, and more target-aware trajectories.

Object Navigation. Object navigation (ObjectNav) [11] requires an embodied agent to operate in a previously unseen environment and locate a target object identified solely by its category name. Existing approaches can be broadly categorized into two families. The first family comprises task-trained methods, including reinforcement learning and imitation learning [6,25,31,32]. These methods rely on large-scale training in task-specific environments, and their generalization is often constrained by the diversity of training data. Consequently, they struggle to maintain robust performance in complex real-world scenes and encounter significant challenges in sim-to-real transfer for deployment on physical robots. The second family comprises zero-shot methods, which leverage pretrained vision-language models (VLMs) [10,19,24] or large language models (LLMs) [33,40,48] that provide strong zero-shot generalization and open-world semantic knowledge [10,33,41]. These methods formulate navigation as a reasoning and planning problem and can directly perform ObjectNav without additional task-specific training. Recent work on zero-shot ObjectNav primarily focuses on integrating pretrained semantic knowledge and reasoning into embodied navigation. These methods progressively enhance semantics-driven exploration and planning through multimodal target embeddings [10,24], vision-language frontier maps [41], instruction-based prompting [23,48], and adaptive fusion of semantic and geometric cues [7,12,17,44,45]. Nevertheless, these methods continue to struggle in realistic, cluttered environments, particularly when the scene involves severe occlusions [10], unknown risks [41], or dynamically moving target objects [9].

Imagination for Navigation. Imagination-based navigation leverages generative or predictive models to simulate future observations and inform decision-making [2,16,27].

Early model-based RL and world-model approaches learn predictive dynamics models, rolling out trajectories in latent space rather than the real environment to train policies [21,37]. Building on this, Navigation World Models (NWM) [2] use a conditional diffusion transformer on egocentric videos to predict future trajectories in pixel space and rank paths, while NavigateDiff [27] employs a diffusion-based visual predictor as a zero-shot navigation assistant. Perincherry et al. [26] generate text-conditioned images for intermediate landmarks as auxiliary cues, and related methods like VISTA [13] align language instructions with predicted views or retrieve experiences via imagined observations. Other works learn scene imagination modules or predictive occupancy maps to complete unobserved spaces and aid exploration [15,22,34,35]. Different from these works, our Schrödinger’s Navigator employs a trajectory-conditioned 3D world model to imagine future observations along multiple candidate paths, fuses the imagined geometry and semantics into the navigation map, and updates a value map to explicitly reason about occlusions and unknown risks, yielding a 3D, uncertainty-aware realization of imagination-based navigation.

We introduce Schrödinger’s Navigator (Figure 2), which handles occlusion-induced uncertainty by imagining future scenes along candidate trajectories.

We study zero-shot object navigation (ZSON) in previously unseen 3D environments with heavy occlusions and dynamic obstacles. An embodied agent operates in an environment E and is given a goal instruction I that specifies a target object category (e.g., “Find the cat”). At each decision step t, the agent is at an unknown global state x_t ∈ X but only has access to an egocentric observation O_t = {V_t, D_t, P_t},

where V_t is the current RGB image, D_t is the depth map from the onboard RGB-D sensor, and P_t is the robot pose in the world coordinate frame. Large portions of the environment, including the target object and potential hazards, may lie in unobserved or occluded regions that are not directly visible in O_t. An episode terminates successfully when the target object is within a small distance threshold and lies in the robot’s field of view, or ends in failure if a maximum step budget is exceeded or the robot enters unsafe regions. Under this setting, our goal is to design a navigation framework that can reason about unobserved space, infer plausible futures behind occluders, and select safe, informative trajectories that drive the robot toward successful object discovery in the complex real world.
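As a concrete reading of this termination rule, the sketch below checks the success and failure conditions at one decision step. The 0.5 m radius matches the paper's real-world success metric, but the field-of-view test, step budget, and function signature are illustrative assumptions, not values specified in this section.

```python
import numpy as np

def episode_status(robot_xy, target_xy, heading_rad, step, detected_in_view,
                   success_dist=0.5, hfov_deg=69.4, max_steps=500, in_unsafe_region=False):
    """Return 'success', 'failure', or 'running' for the current step.

    Assumed thresholds: 0.5 m success radius, the camera HFoV for the
    field-of-view test, and a fixed step budget.
    """
    # Failure: budget exhausted or robot entered an unsafe region.
    if step >= max_steps or in_unsafe_region:
        return "failure"

    # Success: target close enough AND inside the camera's field of view.
    delta = np.asarray(target_xy) - np.asarray(robot_xy)
    dist = np.linalg.norm(delta)
    bearing = np.degrees(np.arctan2(delta[1], delta[0]) - heading_rad)
    bearing = (bearing + 180.0) % 360.0 - 180.0  # wrap to [-180, 180)
    if dist <= success_dist and abs(bearing) <= hfov_deg / 2 and detected_in_view:
        return "success"
    return "running"
```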

Tri-Trajectory Generation. Our ultimate goal is to generate trajectories that both avoid obstacles and successfully locate the target object. Therefore, when using a world model to assist navigation, we first construct several obstacle-bypassing trajectories and then use each trajectory as a condition for the world model, guiding it to generate plausible imaginations that respect obstacle avoidance. In regions with large obstacles or dynamic objects, imagining only a single plausible path often risks overlooking a target occluded by the obstacle or failing to anticipate potential risks introduced by dynamic objects. To make the imagined outcomes more predictive, we select three candidate trajectories, along which cameras orbit around the obstacle:

(1) a left-bypass path, (2) a right-bypass path, and (3) an over-the-top path. This trajectory selection scheme ensures sufficient coverage of occluded areas while maintaining an acceptable computational budget, preventing excessive latency.


Each candidate trajectory is discretized into a sequence of camera poses that orbit the obstacle, parameterized by the camera intrinsics K, the number of cameras N, the total length d_v of trajectory v, and the distance d_c between the camera center and the orbit center.
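The exact parameterization of the three orbits is not reproduced in this excerpt; the following is one plausible construction, sampling N camera positions on half-circle arcs of radius d_c that swing left, right, and over the top of the obstacle. The arc shape, the look-at convention, and the function name are assumptions for illustration.

```python
import numpy as np

def tri_trajectories(orbit_center, start, d_c, n_cams=24):
    """One plausible construction of the three candidate camera trajectories
    (left-bypass, right-bypass, over-the-top).

    orbit_center, start: 3D points (obstacle center, current camera position).
    d_c: orbit radius, i.e., the camera-to-orbit-center distance.
    """
    c = np.asarray(orbit_center, dtype=float)
    s = np.asarray(start, dtype=float)
    fwd = c - s
    fwd[2] = 0.0
    fwd /= np.linalg.norm(fwd) + 1e-8           # horizontal approach direction
    left = np.array([-fwd[1], fwd[0], 0.0])      # horizontal left direction
    up = np.array([0.0, 0.0, 1.0])

    def arc(side_dir):
        # Sweep a half-circle of radius d_c around the obstacle toward side_dir,
        # from the near side of the obstacle to the far side.
        thetas = np.linspace(0.0, np.pi, n_cams)
        pts = [c - d_c * np.cos(t) * fwd + d_c * np.sin(t) * side_dir for t in thetas]
        return np.stack(pts)                     # (n_cams, 3) camera centers

    return {
        "left_bypass": arc(left),
        "right_bypass": arc(-left),
        "over_the_top": arc(up),
    }
```

Under this sketch the arc length π·d_c plays the role of the trajectory length d_v, and each returned array would still need to be converted into camera poses looking at the orbit center before being passed to the world model as the trajectory condition.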

Given each generated trajectory T^(v), we use a world model to generate future imaginations that are geometrically consistent. We adopt FlashWorld [20] as the backend for future scene imagination due to its ability to produce high-quality, 3D-consistent 3D Gaussian Splatting (3DGS) scenes within seconds. However, FlashWorld is an affine-invariant world model, which means it cannot generate scenes aligned with the metric scale of the current environment. To ensure that the generated scene is both high quality and metrically consistent with the current environment, we apply a two-step alignment: (1) Coordinate System Transformation and (2) Global Scale Alignment.

Coordinate System Transformation. To generate high-quality future scenes, we construct a local coordinate system Ŵ centered at the current observation frame, so that the generated trajectories match the trajectory distribution preferred by the world model as closely as possible. After generating the future scene, we transform it from the local coordinate system Ŵ back to the global world coordinate system W using the transformation matrix T_{Ŵ→W} = T_W T_Ŵ^{-1}, where T_Ŵ denotes the pose of the current observation frame under the local coordinate system Ŵ and T_W denotes its pose under the world coordinate system W.

Global Scale Alignment. To merge the generated scene back into the original environment, we estimate a global scale factor s that aligns the scale of the generated scene with the metric scale. Specifically, given the metric depth D_gt(p) of the current observation obtained from the RGB-D camera and the rendered depth D_gs(p) of the corresponding frame in the generated scene, we compute s as the ratio of their medians over the valid pixel set Ω,

s = median_{p ∈ Ω} D_gt(p) / median_{p ∈ Ω} D_gs(p),

where p denotes a pixel location in the image plane and Ω denotes the set of valid pixels for which both the metric depth D_gt(p) and the rendered depth D_gs(p) are available. The ratio between the median of the metric depth and the median of the rendered depth over Ω provides a robust estimate of the global scale.

Semantic Label Transfer. To enrich the aligned 3DGS scene with semantic information, we lift 2D semantic predictions from the image plane to the Gaussian primitives. Let x_i ∈ R^3 denote the 3D center of the i-th Gaussian in the (scale-aligned) camera coordinate frame, and let π(·) be the pinhole projection function. We project each Gaussian center onto the image plane to obtain a pixel location p_i = π(x_i).

If p_i lies within the image bounds and admits a valid semantic prediction, we assign the corresponding pixel-level label to ℓ_i, the semantic label stored in the label field of the i-th Gaussian.

To suppress spurious assignments from occluded or invalid projections, we further restrict the transfer to Gaussians whose projected pixels fall inside the valid depth region Ω and satisfy a depth-consistency check against the rendered 3DGS depth, e.g., |D_gs(p_i) − D_gt(p_i)| < τ_d. In practice, we accumulate such assignments across multiple views and fuse them (e.g., via majority voting) to obtain a robust semantic label for each Gaussian. This projection-and-transfer procedure yields a semantically annotated 3DGS scene, where each visible Gaussian primitive carries a semantic category inherited from 2D segmentation.
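The scale alignment and label-transfer steps above can be summarized in a short sketch. It follows the described procedure (median depth ratio, projection, depth-consistency gating, majority vote across views), but the helper names, the 3×3 pinhole intrinsic convention, and the τ_d value are our assumptions rather than the paper's implementation.

```python
import numpy as np
from collections import Counter

def global_scale(depth_gt, depth_gs):
    """Median-ratio scale factor s between metric and rendered depth."""
    valid = (depth_gt > 0) & (depth_gs > 0)                   # valid pixel set Omega
    return np.median(depth_gt[valid]) / np.median(depth_gs[valid])

def transfer_labels(centers_cam, seg_map, depth_gt, depth_gs, K, tau_d=0.2):
    """Assign 2D semantic labels to 3D Gaussian centers with a depth-consistency check.

    centers_cam: (N, 3) scale-aligned Gaussian centers in the camera frame.
    seg_map:     (H, W) integer label image from the 2D segmenter.
    tau_d:       depth-consistency threshold in meters (value assumed).
    Returns per-Gaussian labels for this view (-1 where no valid assignment).
    """
    H, W = seg_map.shape
    labels = np.full(len(centers_cam), -1, dtype=int)
    z = centers_cam[:, 2]
    uvw = (K @ centers_cam.T).T                                # pinhole projection
    u = np.round(uvw[:, 0] / np.maximum(z, 1e-6)).astype(int)
    v = np.round(uvw[:, 1] / np.maximum(z, 1e-6)).astype(int)
    ok = (z > 0) & (u >= 0) & (u < W) & (v >= 0) & (v < H)
    for i in np.where(ok)[0]:
        dg, dr = depth_gt[v[i], u[i]], depth_gs[v[i], u[i]]
        if dg > 0 and dr > 0 and abs(dr - dg) < tau_d:          # inside Omega + consistency
            labels[i] = seg_map[v[i], u[i]]
    return labels

def fuse_views(per_view_labels):
    """Majority vote across views; ignores invalid (-1) assignments."""
    fused = []
    for votes in zip(*per_view_labels):
        votes = [l for l in votes if l >= 0]
        fused.append(Counter(votes).most_common(1)[0][0] if votes else -1)
    return np.array(fused)
```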

Overview of Navigation Pipeline. As in InstructNav [23], we first apply a large language model (LLM) to convert the natural language instruction I into a time-evolving sequence of action-landmark pairs (DCoN). At each decision step t, given the current observation O_t = {V_t, D_t, P_t} and the accumulated plan C_{1:t}, the LLM predicts the next pair (a_{t+1}, ℓ_{t+1}) = LLM(O_t, C_{1:t}),

where C_{1:t} denotes the action-landmark plan up to step t.
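A minimal sketch of this planning step is shown below, using the standard OpenAI chat-completions API since the paper reports GPT-4o with default settings. The prompt wording, the observation caption, and the "action | landmark" output convention are assumptions; the paper does not specify its prompt format here.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def next_action_landmark(instruction, observation_caption, plan_so_far):
    """Query the LLM for the next (action, landmark) pair of the DCoN plan."""
    prompt = (
        f"Instruction: {instruction}\n"
        f"Current observation: {observation_caption}\n"
        f"Plan so far: {plan_so_far}\n"
        "Reply with the next step as: <action> | <landmark>"
    )
    resp = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": prompt}],
    )
    action, landmark = resp.choices[0].message.content.split("|", 1)
    return action.strip(), landmark.strip()
```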

Next, the language-level DCoN is grounded into an executable trajectory via Multi-sourced Value Maps. Specifically, we fuse the action preference map m_a, the semantic landmark map m_s, the trajectory suppression map m_t, and the heuristic guidance map m_i to form the decision map m.
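The fusion operator itself is not reproduced in this excerpt. One simple instantiation, written here only as an assumed sketch of InstructNav-style value-map fusion and not as the paper's exact operator, is a weighted combination of the four sources:

```latex
% Assumed weighted-sum instantiation of the multi-sourced decision map;
% the exact fusion operator and weights are not given in this excerpt.
m(p) \;=\; w_a\, m_a(p) \;+\; w_s\, m_s(p) \;+\; w_t\, m_t(p) \;+\; w_i\, m_i(p),
\qquad w_a, w_s, w_t, w_i \ge 0 .
```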

While this multi-sourced value map m provides a strong guidance signal from the current observation, it is inherently myopic and cannot explicitly reason about targets or risks that are fully occluded in the unobserved space. To mitigate this limitation, we further incorporate imagined future observations from a trajectory-conditioned 3D world model.

To move beyond purely myopic, observation-only decisions based on m, we augment the navigation pipeline with a future-aware value map constructed from imagined 3DGS scenes. After obtaining the 3DGS scenes generated by the trajectory-conditioned world model and their corresponding semantic segmentation, we update global sets of navigable Gaussians G_nav and semantic Gaussians G_sem. Unlike conventional 3D Gaussian representations, we encode each Gaussian as a nine-dimensional vector g = [x, y, z, r, g, b, rad, opa, label], which substantially reduces the memory footprint and accelerates downstream processing while preserving sufficient expressive power. These augmented maps extend the currently observed scene with hypothesized free space and semantic hypotheses behind occluders. We then define a future-aware value map m_FA that directly scores each navigable Gaussian by jointly accounting for semantic relevance and information gain.
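One way to realize this compact nine-dimensional encoding is a structured NumPy record, shown below as an illustrative sketch; the field names follow the paper's vector layout, but the dtype widths are our assumption.

```python
import numpy as np

# Compact nine-dimensional Gaussian record used for planning:
# [x, y, z, r, g, b, rad, opa, label]. Dtype widths are assumed.
GAUSSIAN_DTYPE = np.dtype([
    ("x", np.float32), ("y", np.float32), ("z", np.float32),   # 3D center
    ("r", np.uint8), ("g", np.uint8), ("b", np.uint8),          # color
    ("rad", np.float32),                                        # isotropic radius
    ("opa", np.float32),                                        # opacity
    ("label", np.int16),                                         # semantic class id
])

def make_gaussians(n):
    """Allocate an empty planning-time Gaussian buffer (e.g., for G_nav or G_sem)."""
    return np.zeros(n, dtype=GAUSSIAN_DTYPE)
```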

For each navigable Gaussian g ∈ G_nav with 3D center x_g, we assign a future-aware value

m_FA(g) = α_sem S(g) + α_exp E(g),

where S(g) is a semantic score, E(g) is an exploration score, and α_sem, α_exp > 0 are weighting coefficients.

Semantic score S(g): target proximity. We focus on Gaussians whose semantic labels match the target category (e.g., cat, table, door). Let T_real denote target Gaussians obtained from direct observations and T_hyp denote target-like Gaussians hypothesized by the world model (e.g., a cat inferred to be behind a table). For any set S, we define the distance from g to S as d(g, S) = min_{g′ ∈ S} ∥x_g − x_{g′}∥.

The semantic score S(g) is designed to increase when g is closer to either real or hypothesized targets. We additionally apply a discount factor λ_sem < 1 to T_hyp so that imagined targets contribute less than directly observed ones.

Exploration score E(g): coverage of new free space. Let F_new ⊂ G_nav denote Gaussians corresponding to free space predicted by the world model but not yet observed. For each candidate g, we consider a visibility radius r_vis and count how many newly predicted free Gaussians lie in its local neighborhood:

Ẽ(g) = |{g′ ∈ F_new : ∥x_g − x_{g′}∥ ≤ r_vis}|.

We then normalize Ẽ(g) over all candidates to obtain E(g) ∈ [0, 1]. Intuitively, positions that reveal more previously unseen yet likely free regions receive higher exploration scores. By combining semantic proximity to both real and hypothesized targets with the potential to uncover new free space, m_FA complements the original multi-sourced map m with explicitly future-aware reasoning. In the next subsection, we show how to fuse these two signals into a single decision affordance map.
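A compact sketch of this scoring is given below. The exploration count follows the definition above; the exponential-decay form of S(g) and all coefficient values are assumptions made only for illustration, since the text specifies just the qualitative behavior (closer targets score higher, imagined targets discounted by λ_sem).

```python
import numpy as np

def future_aware_values(centers, t_real, t_hyp, f_new,
                        alpha_sem=1.0, alpha_exp=1.0, lam_sem=0.5, r_vis=1.0):
    """Score navigable Gaussian centers with m_FA = alpha_sem*S + alpha_exp*E.

    centers, t_real, t_hyp, f_new: (N, 3) arrays of 3D Gaussian centers.
    """
    def min_dist(pts, target_set):
        if len(target_set) == 0:
            return np.full(len(pts), np.inf)
        diff = pts[:, None, :] - target_set[None, :, :]
        return np.linalg.norm(diff, axis=-1).min(axis=1)

    # Semantic score: proximity to real targets plus discounted hypothesized targets.
    s = np.exp(-min_dist(centers, t_real)) + lam_sem * np.exp(-min_dist(centers, t_hyp))

    # Exploration score: count newly predicted free Gaussians within r_vis, then normalize.
    counts = np.array([(np.linalg.norm(f_new - c, axis=1) <= r_vis).sum() if len(f_new) else 0
                       for c in centers], dtype=float)
    e = counts / counts.max() if counts.size and counts.max() > 0 else counts

    return alpha_sem * s + alpha_exp * e
```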

The multi-sourced value map m and the future-aware value map m_FA capture complementary information: the former focuses on current-step cues, while the latter encodes semantic and exploratory value inferred from imagined futures. The future-aware value map m_FA is defined over the same domain as the original multi-sourced value map m. We fuse them into a final decision affordance map

m_aff = (1 − β) m + β m_FA,

where β ∈ [0, 1] balances current-step evidence and future-aware reasoning. In the target selection step, we simply replace m with m_aff and select the next target point as

p* = argmax_p m_aff(p),

and feed p* to the local/global planner. This yields a compact, single future-aware map that simultaneously encodes semantic goal guidance, information gain, and safety.
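As a minimal sketch, assuming both maps are rasterized onto the same 2D grid, the fusion and target selection reduce to a weighted sum and an argmax; the β value here is a placeholder (the paper lists its parameters in Table 2).

```python
import numpy as np

def select_waypoint(m, m_fa, beta=0.5):
    """Fuse the two value maps and pick the highest-value cell as the next waypoint."""
    m_aff = (1.0 - beta) * m + beta * m_fa          # decision affordance map
    iy, ix = np.unravel_index(np.argmax(m_aff), m_aff.shape)
    return (ix, iy), m_aff                          # grid coordinates of p*
```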

In this section, we evaluate the effectiveness and practicality of Schrödinger’s Navigator, detailing the experimental setup and validation criteria. The tasks require the agent to interpret and execute a broad spectrum of natural language commands. To ensure statistical reliability and minimize the influence of random environmental factors, each task was repeated five times. This comprehensive evaluation framework allows for a thorough assessment of the agent’s generalization, robustness, and ability to handle complex, language-guided navigation challenges in realistic indoor scenarios.

For quantitative evaluation, we adopt the Success Rate (SR) as the primary performance metric. A navigation episode is deemed successful only if two strict conditions are met:

(1) the agent successfully executes the full sequence of instructions, and (2) the final Euclidean distance between the agent’s position and the predefined target object is less than 0.5 meters. This stringent metric ensures that successful task completion requires both correct path planning and precise object localization.

Hardware Platform. As shown in Figure 5, our real-world experiments rely on the Unitree Go2 mobile robot platform, chosen for its agility and compact form factor suitable for indoor deployment. The robot is equipped with a RealSense D435i camera, which serves as the primary sensing modality, providing synchronized RGB images, depth maps, and IMU data. All inference processes are executed on a single NVIDIA H800 GPU.

System Configuration. The navigation system is deployed on a remote server, which communicates with the Go2 robot via a FastAPI interface. The robot is initialized with its native obstacle avoidance system enabled. The server-side processing pipeline handles high-level perception and cognition: we utilize GLEE [38] for semantic segmentation of the visual stream. For high-level reasoning and planning, we employ GPT-4o, which interprets complex goals to plan dynamic navigation chains and visually assesses potential routes to judge navigation directions. All large-model parameters are kept at the OpenAI default settings. The generated navigation commands, such as basic motion instructions and specific path-point tracking, are sent to the Go2’s onboard Jetson Orin system via HTTP POST requests.

Data Processing Pipeline. The raw sensor data from the RealSense camera undergoes a structured processing pipeline. RGB and depth streams are transmitted from the robot to the server as Base64-encoded strings, which are decoded into NumPy arrays for efficient processing. The RealSense D435i is configured to capture 640 × 480 images, providing a 69.4° horizontal field of view. In parallel, IMU data, such as accelerometer and gyroscope readings, is serialized and transmitted in JSON format.
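The server-side decoding step can be sketched as below. The concrete wire format (JPEG-compressed RGB and 16-bit depth in millimeters) is an assumption for illustration; the text only states that Base64 strings are decoded into NumPy arrays.

```python
import base64
import io

import numpy as np
from PIL import Image

def decode_frame(rgb_b64: str, depth_b64: str):
    """Decode Base64-encoded RGB and depth strings into NumPy arrays."""
    rgb = np.array(Image.open(io.BytesIO(base64.b64decode(rgb_b64))))         # (480, 640, 3) uint8
    depth_mm = np.array(Image.open(io.BytesIO(base64.b64decode(depth_b64))))  # (480, 640) uint16, assumed mm
    depth_m = depth_mm.astype(np.float32) / 1000.0                            # convert to meters
    return rgb, depth_m
```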

We rigorously evaluate the effectiveness of our Schrödinger’s Navigator against the established baseline InstructNav [23], the only existing open-sourced zero-shot object navigation system capable of handling language-guided, open-world tasks in real-world environments. Our real-world validation spans three challenging task categories designed to assess core competency and dynamic adaptability: (1) searching for static objects, (2) searching for dynamic objects, and (3) navigating in the presence of sudden obstacles. Quantitative performance, summarized as success counts over ten trials per environment (Office, Classroom, Common Room), is presented in Table 1.

Quantitative results in Table 1 reveal a clear and significant performance advantage for our method. Overall, this performance differential is primarily attributable to our system’s superior handling of dynamic elements and environmental stochasticity. While our method achieves comparable performance in the “Search for static objects” task (23/30 vs. 22/30), its capabilities truly diverge in more complex, unpredictable scenarios. For “Search for dynamic objects,” our system succeeds in 16/30 trials versus the baseline’s 10/30. This advantage is even more pronounced in the “Sudden Obstacles” task, where our method achieves a 19/30 success rate, compared to a mere 12/30 for InstructNav, whose performance degrades markedly under dynamic conditions. These quantitative findings are supported by our qualitative results shown in Figure 6, Figure 7, and Figure 8. While Figure 6 demonstrates our method’s competency in diverse static scenes, Figure 7 and Figure 8 provide direct visual evidence of our key advantages: they respectively showcase successful dynamic target pursuit and real-time replanning in response to emergent obstacles, validating the practical efficacy of our approach.

Future Works. As future work, our framework could be extended beyond the current three canonical trajectories and specific world-model backend to incorporate richer trajectory ensembles, more scalable 3D generative models, and larger-scale evaluations in simulation, outdoor, and real-world environments. We believe that treating unobserved space as an ensemble of plausible futures and grounding imagination in 3D geometry provides a promising path toward robust, uncertainty-aware embodied navigation in complex real-world settings.

Our Navigator addresses a core limitation of existing zero-shot object navigation systems: their inability to reason about heavily occluded and uncertain regions in cluttered environments. By combining a tri-trajectory sampler with a trajectory-conditioned 3D world model, our framework imagines multiple plausible 3D futures along candidate paths and fuses them into a unified 3DGS scene. These imagined observations are integrated into a multi-sourced navigation map and a future-aware value map, producing a single affordance map that encodes semantic goals, information gain, and safety, and can be used with standard planners without task-specific retraining.

Real-world experiments on a Unitree Go2 quadruped across diverse indoor scenes show that Schrödinger’s Navigator not only matches strong zero-shot baselines on static object search, but significantly improves performance in scenarios with dynamic targets and sudden obstacles, where reasoning over imagined 3D futures is crucial.

Schrödinger’s Navigator: Imagining an Ensemble of Futures for Zero-Shot Object Navigation Supplementary Material

We present qualitative examples in both simulated and real-world environments in the attached video. In simulation, we select several challenging cases characterized by severe visual occlusions. For real-world evaluation, we show three indoor scenes: office, classroom, and common room. In the classroom, we further demonstrate robustness under two types of dynamic conditions:

• moving objects, such as a chair in motion.

• sudden occlusion caused by a chair abruptly entering the view.

Please refer to the video for detailed demonstrations.

In this section, we provide additional experimental details, mainly the parameter settings.

FlashWorld Parameters. We follow the default parameter settings of FlashWorld, as detailed below:

• Image resolution: 480 × 704.

• Key frames: 24.

• Frame rate: 15 fps.

Navigation Pipeline Parameters. Table 2 lists the parameters used in the future-aware value map and the resulting affordance map.

In the main paper, we focus on real-world evaluations on the Unitree Go2 platform to highlight the practical effectiveness of our method in cluttered indoor environments with complex occlusions. To provide a more comprehensive and controlled assessment, we additionally conduct extensive experiments in the Habitat simulator, where we compare against prior state-of-the-art baselines across multiple quantitative metrics.

Datasets. All experiments are performed in the Habitat simulator using the HM3D benchmark [30], a large-scale, photorealistic dataset of indoor 3D environments. HM3D comprises 36 meticulously reconstructed scenes spanning residential and commercial spaces, with high geometric fidelity and dense visual textures. Following standard protocols, we evaluate across 1,000 navigation episodes covering six commonly used target categories. The resulting setup provides a diverse and challenging testbed for benchmarking embodied navigation under realistic visual, geometric, and semantic variations.

Evaluation Metrics. Following standard practice in object-goal navigation, we adopt three widely used metrics from the Habitat evaluation protocol [1]. (1) Success Rate (SR): the fraction of episodes in which the agent stops within a fixed tolerance (typically d ≤ τ meters) of the target object. (2) Success weighted by Path Length (SPL): a path-efficiency-aware metric defined as SPL = (1/N) Σ_i S_i · L*_i / max(L_i, L*_i), where S_i is the binary success indicator, L*_i is the geodesic shortest-path distance, and L_i is the length of the executed trajectory. (3) Distance to Goal (DTG): the geodesic distance between the agent’s final position and the target object at episode termination, regardless of success. This metric reflects residual navigation error and complements SR/SPL by capturing near-success cases.

Implementation Details. For the textual planner, we use GPT-4o [14] to understand high-level human goal instructions and make spatial decisions. For the visual judge, we also use GPT-4o to assess multi-view panoramic images. We use GLEE for object detection and semantic segmentation. Each robot agent is equipped with an egocentric RGB camera with a resolution of 300 × 300 and an HFoV of 90°. All systems and experiments are run on a single compute node with two NVIDIA RTX 4090 GPUs.

Baselines. We compare our approach against a broad set of strong baselines. The first group, comprising ZSON [24], PixNav [3], SPNet [47], and SGM [45], relies on task-specific training, which limits their ability to generalize in zero-shot settings. The second group consists of methods that can be further divided into several families. CoW [10] adopts a purely geometric nearest-frontier exploration strategy without semantic reasoning. ESC [48], L3MVN [42], and TriHelper [43] improve exploration by first constructing semantic maps and then using LLMs to select promising frontiers based on semantic cues. VoroNav [39] regularizes exploration by generating frontiers from a Voronoi partition of free space, encouraging more structured coverage, while GAMap [12] learns a Gaussian-style value/affordance map to prioritize frontiers that are more likely to contain the target. VLFM [41] and InstructNav [23] go one step further by leveraging LLMs or VLMs to directly produce value maps that guide exploration.

We compare our method with state-of-the-art object-goal navigation models on the HM3D dataset. Results are summarized in Table 3. Our method achieves the best performance in terms of DTG, indicating that our future-aware value map effectively guides the agent closer to the target object. While ApexNav and CogNav achieve higher SR and SPL, they rely on more complex planning mechanisms and object-centric maps, whereas our approach maintains a simpler and more efficient pipeline. Overall, our method demonstrates competitive performance in zero-shot object-goal navigation tasks within simulated environments.
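The three metrics can be computed from episode logs as sketched below. The SPL formula is the standard Habitat definition quoted above; the episode-record keys are assumed for illustration.

```python
import numpy as np

def evaluate(episodes):
    """Compute SR, SPL, and DTG from a list of episode records.

    Each record is assumed to be a dict with keys:
      'success' (0/1), 'shortest_path' (L*_i, meters),
      'path_length' (L_i, meters), 'final_dist_to_goal' (meters).
    """
    s = np.array([e["success"] for e in episodes], dtype=float)
    l_star = np.array([e["shortest_path"] for e in episodes], dtype=float)
    l = np.array([e["path_length"] for e in episodes], dtype=float)
    dtg = np.array([e["final_dist_to_goal"] for e in episodes], dtype=float)

    sr = s.mean()
    spl = np.mean(s * l_star / np.maximum(np.maximum(l, l_star), 1e-8))
    return {"SR": sr, "SPL": spl, "DTG": dtg.mean()}
```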

Figure 9 illustrates the 3DGS scenes generated by FlashWorld and the simplified 3DGS representation used for planning. The left three columns show the trajectory-conditioned Gaussians imagined from the current observation along the three predefined trajectories (left, right, and up). The rightmost column shows the simplified representation after downsampling and merging, which substantially reduces the number of Gaussians while preserving the global geometric and semantic structure and can serve as the input to our future-aware value map and affordance map.

