Today's Digest · AI Summary

Cross-disciplinary highlights from the past 24 hours

Automatically fetches arXiv multi-disciplinary RSS feeds; DeepSeek polishes titles and abstracts; digests are compiled within 24 hours at the fastest.

AI Summary

Today's highlights (auto-generated summary): q-bio: Noninvasive rheological inference from stable flows in confined tissues; q-bio: While recognizing actions, LMMs struggle to detect core interaction events; q-bio: MoRE: Batch-Robust Multi-Omics Representations from Frozen Pre-trained Transformers

About this digest

Data source: official arXiv RSS feeds (physics / math / cs / q-bio / econ / astro-ph, among others).

Titles and abstracts are polished into Chinese by DeepSeek for quick browsing; external links lead to the original papers.

AI digest assistant: clicking "Digest full text" on a card fetches the arXiv HTML, generates key points for the full paper, and caches them; the floating button in the bottom-right corner expands or collapses the digest at any time.

Automatic fetching: a batch is initialized daily at 14:00 and each discipline is polled every 15 minutes; if no data arrives within 24 h, the window falls back to 72 h and then 7 days. Weekends, when arXiv pauses announcements, are skipped automatically.

Archive: click "Archive" at the top to open the calendar page, browse daily fetch totals by month, and jump to the digest for any date.
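The fallback-window behavior described in the fetching note above can be sketched in a few lines. This is an illustration only; the function and variable names are hypothetical and not the site's actual code.

```python
from datetime import datetime, timedelta

# Fallback windows from the note above: 24 h first, then 72 h, then 7 days.
WINDOWS_HOURS = [24, 72, 7 * 24]

def pick_window(entry_times, now):
    """Return the smallest window (in hours) containing at least one entry,
    or None if even the 7-day window is empty (e.g. a weekend pause)."""
    for hours in WINDOWS_HOURS:
        cutoff = now - timedelta(hours=hours)
        if any(t >= cutoff for t in entry_times):
            return hours
    return None

now = datetime(2025, 11, 26, 14, 0)
stale_feed = [now - timedelta(hours=50)]   # newest item is 50 h old
window = pick_window(stale_feed, now)      # 24 h window empty -> 72 h
```

An empty feed over the whole week (as during an arXiv weekend pause) yields `None`, which corresponds to skipping the batch.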

Digest for 2025-11-26

143 items fetched on 2025-11-26, sorted by overall popularity

← Back to calendar
q-bio · 11-26 00:00

Noninvasive rheological inference from stable flows in confined tissues

arXiv:2511.20155v1 Announce Type: cross Abstract: Quantifying the in-plane rheology of epithelial monolayers remains challenging due to the difficulty of imposing controlled shear. We introduce a self-driven, rheometer-like assay in which collective migration generates stationary shear flows, allowing rheological parameters to be inferred directly from image sequences. The assay relies on two sets of ring-shaped fibronectin patches, micropatterned in arrays for high-throughput imaging. Within isolated rings, the epithelial tissue exhibits persistent rotation, from which we infer active migration stresses and substrate friction. Within partially overlapping rings, the tissue exhibits sustained shear, from which we infer the elastic and viscous responses of the cells. The emergence of a Maxwell-like viscoelastic relation --characterized by a linear relationship between mean cell deformation and neighbor-exchange rate-- is specifically recapitulated within a wet vertex-model framework, which reproduces experimental measurements only when intercellular viscous dissipation is included alongside substrate friction. We apply our method to discriminate the respective roles of two myosin II isoforms in tissue mechanics. Overall, by harnessing self-generated stresses instead of externally imposed ones, we propose a noninvasive route to rheological inference in migrating epithelial tissues and, more generally, in actively flowing granular materials.

cond-mat.soft · q-bio.TO
q-bio · 11-26 00:00

While recognizing actions, LMMs struggle to detect core interaction events

arXiv:2511.20162v1 Announce Type: cross Abstract: Large multi-modal models (LMMs) show increasing performance in realistic visual tasks for images and, more recently, for videos. For example, given a video sequence, such models are able to describe in detail objects, the surroundings and dynamic actions. In this study, we explored the extent to which these models ground their semantic understanding in the actual visual input. Specifically, given sequences of hands interacting with objects, we asked models when and where the interaction begins or ends. For this purpose, we introduce a first-of-its-kind large-scale dataset with more than 20K annotated interactions on videos from the Something-Something-V2 dataset. 250 AMTurk human annotators labeled core interaction events, particularly when and where objects and agents become attached ('contact') or detached ('release'). We asked two LMMs (Qwen-2.5VL and GPT-4o) to locate these events in short videos, each with a single event. The results show that although the models can reliably name the target objects, identify the action and provide coherent reasoning, they consistently fail to identify the frame where the interaction begins or ends and cannot localize the event within the scene. Our findings suggest that in struggling to pinpoint the moment and location of physical contact that defines the interaction, the models lack the perceptual grounding required for deeper understanding of dynamic scenes.

q-bio.NC · cs.CV · cs.AI
q-bio · 11-26 00:00

MoRE: Batch-Robust Multi-Omics Representations from Frozen Pre-trained Transformers

arXiv:2511.20382v1 Announce Type: cross Abstract: Representation learning on multi-omics data is challenging due to extreme dimensionality, modality heterogeneity, and cohort-specific batch effects. While pre-trained transformer backbones have shown broad generalization capabilities in biological sequence modeling, their application to multi-omics integration remains underexplored. We present MoRE (Multi-Omics Representation Embedding), a framework that repurposes frozen pre-trained transformers to align heterogeneous assays into a shared latent space. Unlike purely generative approaches, MoRE employs a parameter-efficient fine-tuning (PEFT) strategy, prioritizing cross-sample and cross-modality alignment over simple sequence reconstruction. Specifically, MoRE attaches lightweight, modality-specific adapters and a task-adaptive fusion layer to the frozen backbone. It optimizes a masked modeling objective jointly with supervised contrastive and batch-invariant alignment losses, yielding structure-preserving embeddings that generalize across unseen cell types and platforms. We benchmark MoRE against established baselines, including scGPT, scVI, and Harmony with scArches, evaluating integration fidelity, rare population detection, and modality transfer. Our results demonstrate that MoRE achieves competitive batch robustness and biological conservation while significantly reducing trainable parameters compared to fully fine-tuned models. This work positions MoRE as a practical step toward general-purpose omics foundation models.

cs.LG · q-bio.GN
q-bio · 11-26 00:00

Development of a fully deep learning model to improve the reproducibility of sector classification systems for predicting unerupted maxillary canine likelihood of impaction

arXiv:2511.20493v1 Announce Type: cross Abstract: Objectives. The aim of the present study was to develop a fully deep learning model to reduce the intra- and inter-operator variability of sector classification systems for predicting unerupted maxillary canine likelihood of impaction. Methods. Three orthodontists (Os) and three general dental practitioners (GDPs) classified the position of unerupted maxillary canines on 306 radiographs (T0) according to the three different sector classification systems (5-, 4-, and 3-sector classification system). The assessment was repeated after four weeks (T1). Intra- and inter-observer agreement were evaluated with Cohen's K and Fleiss K, and between-group differences with a z-test. The same radiographs were tested on different artificial intelligence (AI) models, pre-trained on an extended dataset of 1,222 radiographs. The best-performing model was identified based on its sensitivity and precision. Results. The 3-sector system was found to be the classification method with the highest reproducibility, with agreement (Cohen's K values) between observations (T0 versus T1) for each examiner ranging from 0.80 to 0.92, and an overall agreement of 0.85 [95% confidence interval (CI) = 0.83-0.87]. The overall inter-observer agreement (Fleiss K) ranged from 0.69 to 0.7. The educational background did not affect either intra- or inter-observer agreement (p>0.05). DenseNet121 proved to be the best-performing model in allocating impacted canines in the three different classes, with an overall accuracy of 76.8%. Conclusion. AI models can be designed to automatically classify the position of unerupted maxillary canines.

eess.IV · cs.CV · q-bio.QM
q-bio · 11-26 00:00

Mamba-based Deep Learning Approach for Sleep Staging on a Wireless Multimodal Wearable System without Electroencephalography

arXiv:2412.15947v4 Announce Type: replace Abstract: Study Objectives: We investigate a Mamba-based deep learning approach for sleep staging on signals from ANNE One (Sibel Health, Evanston, IL), a non-intrusive dual-module wireless wearable system measuring chest electrocardiography (ECG), triaxial accelerometry, and chest temperature, and finger photoplethysmography and finger temperature. Methods: We obtained wearable sensor recordings from 357 adults undergoing concurrent polysomnography (PSG) at a tertiary care sleep lab. Each PSG recording was manually scored and these annotations served as ground truth labels for training and evaluation of our models. PSG and wearable sensor data were automatically aligned using their ECG channels with manual confirmation by visual inspection. We trained a Mamba-based recurrent neural network architecture on these recordings. Ensembling of model variants with similar architectures was performed. Results: After ensembling, the model attains a 3-class (wake, non rapid eye movement [NREM] sleep, rapid eye movement [REM] sleep) balanced accuracy of 84.02%, F1 score of 84.23%, Cohen's $\kappa$ of 72.89%, and a Matthews correlation coefficient (MCC) score of 73.00%; a 4-class (wake, light NREM [N1/N2], deep NREM [N3], REM) balanced accuracy of 75.30%, F1 score of 74.10%, Cohen's $\kappa$ of 61.51%, and MCC score of 61.95%; a 5-class (wake, N1, N2, N3, REM) balanced accuracy of 65.11%, F1 score of 66.15%, Cohen's $\kappa$ of 53.23%, MCC score of 54.38%. Conclusions: Our Mamba-based deep learning model can successfully infer major sleep stages from the ANNE One, a wearable system without electroencephalography (EEG), and can be applied to data from adults attending a tertiary care sleep clinic.
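For readers unfamiliar with the agreement metrics quoted in this abstract, balanced accuracy and Cohen's kappa can be computed directly from labels and predictions. The sketch below uses a toy 3-class example (wake/NREM/REM), not the paper's data or code.

```python
from collections import Counter

def balanced_accuracy(y_true, y_pred):
    """Mean of per-class recalls, the headline metric in the abstract."""
    recalls = []
    for c in set(y_true):
        idx = [i for i, t in enumerate(y_true) if t == c]
        recalls.append(sum(y_pred[i] == c for i in idx) / len(idx))
    return sum(recalls) / len(recalls)

def cohens_kappa(y_true, y_pred):
    """Agreement corrected for chance: (p_o - p_e) / (1 - p_e)."""
    n = len(y_true)
    p_o = sum(t == p for t, p in zip(y_true, y_pred)) / n
    ct, cp = Counter(y_true), Counter(y_pred)
    p_e = sum(ct[c] * cp.get(c, 0) for c in ct) / n**2
    return (p_o - p_e) / (1 - p_e)

# Toy 3-class staging example: W = wake, N = NREM, R = REM
y_true = ["W", "W", "N", "N", "N", "R", "R", "R"]
y_pred = ["W", "N", "N", "N", "R", "R", "R", "W"]
bal = balanced_accuracy(y_true, y_pred)   # mean of the three recalls
kap = cohens_kappa(y_true, y_pred)
```

Balanced accuracy averages recall over classes, so it is robust to the class imbalance typical of sleep data (far more NREM epochs than REM).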

cs.LG · q-bio.QM
q-bio · 11-26 00:00

Deep learning and whole-brain networks for biomarker discovery: modeling the dynamics of brain fluctuations in resting-state and cognitive tasks

arXiv:2412.19329v2 Announce Type: replace Abstract: Background: Brain network models offer insights into brain dynamics, but the utility of model-derived bifurcation parameters as biomarkers remains underexplored. Objective: This study evaluates bifurcation parameters from a whole-brain network model as biomarkers for distinguishing brain states associated with resting-state and task-based cognitive conditions. Methods: Synthetic BOLD signals were generated using a supercritical Hopf brain network model to train deep learning models for bifurcation parameter prediction. Inference was performed on Human Connectome Project data, including both resting-state and task-based conditions. Statistical analyses assessed the separability of brain states based on bifurcation parameter distributions. Results: Bifurcation parameter distributions differed significantly across task and resting-state conditions ($p < 0.0001$ for all but one comparison). Task-based brain states exhibited higher bifurcation values compared to rest. Conclusion: Bifurcation parameters effectively differentiate cognitive and resting states, warranting further investigation as biomarkers for brain state characterization and neurological disorder assessment.

cs.LG · q-bio.NC
q-bio · 11-26 00:00

Complex multiannual cycles of Mycoplasma pneumoniae: persistence and the role of stochasticity

arXiv:2504.11402v3 Announce Type: replace Abstract: The epidemiological dynamics of Mycoplasma pneumoniae is characterized by poorly understood complex multiannual cycles. The origins of these cycles have long been debated, and multiple explanations of varying complexity have been suggested. Using Bayesian methods, we fit a dynamical model to half a century of M. pneumoniae surveillance data from Denmark (1958-1995, 2010-2025) and uncover a parsimonious explanation for the persistent cycles, based on the theory of quasicycles. The period of the multiannual cycle (approx. 5 years in Denmark) is explained by susceptible replenishment due, primarily, to loss of immunity. While an excellent fit to shorter time series (a few decades), the deterministic model eventually settles into an annual cycle, unable to reproduce the persistent cycles. We find that environmental stochasticity (e.g., varying contact rates) stabilizes the multiannual cycles and so does demographic noise, at least in smaller or incompletely mixing populations. The temporary disappearance of cycles during 1979-1985 is explained as a consequence of stochastic mode-hopping. The circulation of M. pneumoniae was recently disrupted by COVID-19 non-pharmaceutical interventions (NPIs), providing a natural experiment on the effects of large perturbations. Consequently, the effects of NPIs are included in the model and medium-term predictions are explored. Our findings highlight the intrinsic sensitivity of M. pneumoniae dynamics to perturbations and interventions, underscoring the limitations for long-term prediction. More generally, our findings provide further evidence for the role of stochasticity as a driver of complex cycles across endemic and recurring pathogens.

q-bio.PE · nlin.CD
q-bio · 11-26 00:00

Coevolutionary balance of resting-state brain networks in autism

arXiv:2507.09045v2 Announce Type: replace Abstract: Autism spectrum disorder (ASD) involves atypical brain organization, yet the large-scale functional principles underlying these alterations remain incompletely understood. Here we examine whether coevolutionary balance -- a network-level energy measure derived from signed interactions and nodal activity states -- captures disruptions in resting-state functional connectivity in autistic adults. Using ABIDE I resting-state fMRI data, we constructed whole-brain networks by combining binarized fALFF activity with signed functional correlations and quantified their coevolutionary energy. Compared with matched typically developing adults, the ASD group showed a characteristic redistribution of coevolutionary energy, with more negative global energy but higher (less negative) energy within the default mode network and altered energy in its interactions with dorsal attention and salience networks, indicating a reorganization rather than a uniform loss of balance in intrinsic network organization. These effects replicated across validation analyses with null models designed to disrupt link or node structure. Coevolutionary energy also showed modest but significant associations with ADI-R social and communication scores. Finally, incorporating coevolutionary features into a leakage-safe machine-learning classifier supported above-chance ASD versus typically developing (TD) discrimination on a held-out test set. These findings suggest that coevolutionary balance offers a compact, interpretable descriptor of altered resting-state network dynamics in autism.

q-bio.NC · physics.bio-ph
q-bio · 11-26 00:00

Comment on "Direct Targeting and Regulation of RNA Polymerase II by Cell Signaling Kinases"

arXiv:2511.19444v1 Announce Type: new Abstract: Dabas et al. in Science 2025 report that approximately 117 human kinases directly phosphorylate the C-terminal domain (CTD) of RNA polymerase II (Pol II), proposing an extensive, direct biochemical bridge between signal transduction and transcriptional control. Such a sweeping claim that one-fourth of the human kinome directly targets the CTD represents a profound revision of canonical transcriptional biology. However, the evidence presented relies primarily on in vitro kinase assays using short CTD peptides, sparse in-cell validation, and mechanistically incomplete models of nuclear trafficking, chromatin targeting, structural compatibility, and catalytic specificity. In this extended critique, we demonstrate that the conclusions of this study are not supported by current biochemical, structural, cell biological, or genomic data. We outline severe shortcomings in assay design, lack of quantitative kinetics, incompatibilities with known Pol II structural constraints, unsupported assumptions about nuclear localization, inappropriate extension to "direct-at-gene" mechanisms, absence of global transcriptional effects, failure to align with the essential role of canonical CDKs, and missing transparency in dataset reporting. We conclude that the central claims of the study are premature and contradicted by decades of established transcriptional research. Substantial new evidence is required before revising the mechanistic model of Pol II CTD regulation.

q-bio.SC · q-bio.MN · q-bio.CB
q-bio · 11-26 00:00

Masked Autoencoder Joint Learning for Robust Spitzoid Tumor Classification

arXiv:2511.19535v1 Announce Type: new Abstract: Accurate diagnosis of spitzoid tumors (ST) is critical to ensure a favorable prognosis and to avoid both under- and over-treatment. Epigenetic data, particularly DNA methylation, provide a valuable source of information for this task. However, prior studies assume complete data, an unrealistic setting as methylation profiles frequently contain missing entries due to limited coverage and experimental artifacts. Our work challenges these favorable scenarios and introduces ReMAC, an extension of ReMasker designed to tackle classification tasks on high-dimensional data under complete and incomplete regimes. Evaluation on real clinical data demonstrates that ReMAC achieves strong and robust performance compared to competing classification methods in the stratification of ST. Code is available: https://github.com/roshni-mahtani/ReMAC.

cs.LG · q-bio.QM
q-bio · 11-26 00:00

Parallelism in Neurodegenerative Biomarker Tests: Hidden Errors and the Risk of Misconduct

arXiv:2511.19549v1 Announce Type: new Abstract: Biomarkers are critical tools in the diagnosis and monitoring of neurodegenerative diseases. Reliable quantification depends on assay validity, especially the demonstration of parallelism between diluted biological samples and the assay's standard curve. Inadequate parallelism can lead to biased concentration estimates, jeopardizing both clinical and research applications. Here we systematically review the evidence of analytical parallelism in body fluid (serum, plasma, cerebrospinal fluid) biomarker assays for neurodegeneration and evaluate the extent, reproducibility, and reporting quality of partial parallelism. This systematic review was registered on PROSPERO (CRD42024568766) and conducted in accordance with PRISMA guidelines. We included studies published between December 2010 and July 2024 without language restrictions. ... In conclusion, partial parallelism was infrequently observed and inconsistently reported in most biomarker assays for neurodegeneration. Narrow dilution ranges and variable methodologies limit generalizability. Transparent reporting of dilution protocols and adherence to established analytical validation guidelines is needed. This systematic review has practical implications for clinical trial design, regulatory approval processes, and the reliability of biomarker-based diagnostics.

stat.AP · q-bio.QM
q-bio · 11-26 00:00

Population size in stochastic multi-patch ecological models

arXiv:2511.19743v1 Announce Type: new Abstract: We look at the interaction of dispersal and environmental stochasticity in $n$-patch models. We are able to prove persistence and extinction results even in the setting when the dispersal rates are stochastic. As applications we look at Beverton-Holt and Hassell functional responses. We find explicit approximations for the total population size at stationarity when we look at slow and fast dispersal. In particular, we show that if dispersal is small then in the Beverton-Holt setting, if the carrying capacity is random, then environmental fluctuations are always detrimental and decrease the total population size. Instead, in the Hassell setting, if the inverse of the carrying capacity is made random, then environmental fluctuations always increase the population size. Fast dispersal can save populations from extinction and therefore increase the total population size. We also analyze a different type of environmental fluctuation which comes from switching environmental states according to a Markov chain and find explicit approximations when the switching is either fast or slow -- in examples we are able to show that slow switching leads to a higher population size than fast switching. Using and modifying some approximation results due to Cuello, we find expressions for the total population size in the $n=2$ patch setting when the growth rates, carrying capacities, and dispersal rates are influenced by random fluctuations. We find that there is a complicated interaction between the various terms and that the covariances between the various random parameters (growth rate, carrying capacity, dispersal rate) play a key role in whether we get an increase or a decrease in the total population size. Environmental fluctuations thus turn out to sometimes be beneficial -- this shows that not only dispersal but also environmental stochasticity can lead to an increase in population size.
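The paper's qualitative claim for the Beverton-Holt case (a random carrying capacity depresses abundance) can be reproduced in a toy single-patch simulation, since the map is concave in the carrying capacity and a Jensen-type effect follows. This is an illustration under simplified assumptions, not the paper's $n$-patch model; parameter values are arbitrary.

```python
import random

def beverton_holt_step(n, r, k):
    """Beverton-Holt map with growth rate r and carrying capacity k;
    for r > 1 the positive fixed point is n* = k."""
    return r * n / (1 + (r - 1) * n / k)

def mean_abundance(r=2.0, k=100.0, sigma=0.0, steps=20000, seed=1):
    """Long-run mean population under a uniformly fluctuating K."""
    rng = random.Random(seed)
    n, total = k, 0.0
    for _ in range(steps):
        k_t = k * (1 + sigma * (2 * rng.random() - 1))  # K in [K(1-s), K(1+s)]
        n = beverton_holt_step(n, r, k_t)
        total += n
    return total / steps

det = mean_abundance(sigma=0.0)    # deterministic: stays at K = 100
noisy = mean_abundance(sigma=0.5)  # fluctuating K lowers the mean below 100
```

Because $rnk/(k + (r-1)n)$ is concave and increasing in $k$, averaging over a fluctuating $k$ yields a smaller expected next-step population than using the mean carrying capacity, matching the "always detrimental" direction stated in the abstract.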

q-bio.PE · math.PR
q-bio · 11-26 00:00

Time-Varying Network Driver Estimation (TNDE) Quantifies Stage-Specific Regulatory Effects From Single-Cell Snapshots

arXiv:2511.19813v1 Announce Type: new Abstract: Identifying key driver genes governing biological processes such as development and disease progression remains a challenge. While existing methods can reconstruct cellular trajectories or infer static gene regulatory networks (GRNs), they often fail to quantify time-resolved regulatory effects within specific temporal windows. Here, we present Time-varying Network Driver Estimation (TNDE), a computational framework quantifying dynamic gene driver effects from single-cell snapshot data under a linear Markov assumption. TNDE leverages a shared graph attention encoder to preserve the local topological structure of the data. Furthermore, by incorporating partial optimal transport, TNDE accounts for unmatched cells arising from proliferation or apoptosis, thereby enabling trajectory alignment in non-equilibrium processes. Benchmarking on simulated datasets demonstrates that TNDE outperforms existing baseline methods across diverse complex regulatory scenarios. Applied to mouse erythropoiesis data, TNDE identifies stage-specific driver genes, the functional relevance of which is corroborated by biological validation. TNDE offers an effective quantitative tool for dissecting dynamic regulatory mechanisms underlying complex biological processes.

q-bio.MN · stat.AP · stat.ML
q-bio · 11-26 00:00

Human-computer interactions predict mental health

arXiv:2511.20179v1 Announce Type: new Abstract: Scalable assessments of mental illness, the leading driver of disability worldwide, remain a critical roadblock toward accessible and equitable care. Here, we show that human-computer interactions encode multiple dimensions of self-reported mental health and their changes over time. We introduce MAILA, a MAchine-learning framework for Inferring Latent mental states from digital Activity. We trained MAILA to predict 1.3 million mental-health self-reports from 20,000 cursor and touchscreen recordings from 9,000 online participants. The dataset includes 2,000 individuals assessed longitudinally, 1,500 diagnosed with depression, and 500 with obsessive-compulsive disorder. MAILA tracks dynamic mental states along three orthogonal dimensions, generalizes across contexts, and achieves near-ceiling accuracy when predicting group-level mental health. The model translates from general to clinical populations, identifies individuals living with mental illness, and captures signatures of psychological function that are not conveyed by language. Our results demonstrate how everyday human-computer interactions can power passive, reliable, dynamic, and maximally scalable mental health assessments. The ability to decode mental states at zero marginal cost sets new benchmarks for precision medicine and public health, while raising important questions about privacy, agency, and autonomy online.

cs.HC · q-bio.NC · cs.AI
q-bio · 11-26 00:00

Mechano-chemical modeling of glia initiated secondary injury of neurons under mechanical load

arXiv:2511.20392v1 Announce Type: new Abstract: Traumatic Brain Injury (TBI) results from an impact or concussion to the head with the injury being specifically characterized through pathological degradation at various biological length scales. Following injury, various mechanical modeling techniques have been proposed in the literature that seek to quantify neuronal-scale to tissue-scale metrics of brain damage. Broadly, the two categories of degradation encompass physiological deterioration of neurons and upregulation of chemical entities such as neurotransmitters which causes initiation of downstream pathophysiological effects. Despite the many contributing pathways, in this work, we delineate and model a potential glia-initiated injury pathway that leads to secondary injury. The goal of this work is to demonstrate a continuum framework which models the multiphysics of mechano-chemical interactions underlying TBI. Using a coupled PDE (partial differential equation) formulation and FEM (finite element method) discretization, the framework highlights evolution of field variables which spatio-temporally resolve mechanical metrics and chemical species across neuronal clusters. The modeling domain encompasses microglia, neurons and the extracellular matrix. The continuum framework used to model the mechano-chemical interactions assumes a three dimensional viscoelastic network to capture the mechanical response underlying proteins constituting the neuron microstructure and advection-diffusion equations modeling spatio-temporal evolution of chemical species. We use this framework to numerically estimate key concentrations of chemical species produced by the strain field. In this work, we identify key biomarkers within the labyrinth of molecular pathways and build a framework that captures the core mechano-chemical interactions. This framework is an attempt to quantify secondary injury and thus assist in developing targeted TBI treatments.

q-bio.NC · q-bio.QM
q-bio · 11-26 00:00

MIMIC-MJX: Neuromechanical Emulation of Animal Behavior

arXiv:2511.20532v1 Announce Type: new Abstract: The primary output of the nervous system is movement and behavior. While recent advances have democratized pose tracking during complex behavior, kinematic trajectories alone provide only indirect access to the underlying control processes. Here we present MIMIC-MJX, a framework for learning biologically-plausible neural control policies from kinematics. MIMIC-MJX models the generative process of motor control by training neural controllers that learn to actuate biomechanically-realistic body models in physics simulation to reproduce real kinematic trajectories. We demonstrate that our implementation is accurate, fast, data-efficient, and generalizable to diverse animal body models. Policies trained with MIMIC-MJX can be utilized to both analyze neural control strategies and simulate behavioral experiments, illustrating its potential as an integrative modeling framework for neuroscience.

cs.RO · q-bio.NC · cs.AI
q-bio · 11-26 00:00

Modeling Bioelectric State Transitions in Glial Cells: An ASAL-Inspired Computational Approach to Glioblastoma Initiation

arXiv:2511.19520v1 Announce Type: cross Abstract: Understanding how glioblastoma (GBM) emerges from initially healthy glial tissue requires models that integrate bioelectrical, metabolic, and multicellular dynamics. This work introduces an ASAL-inspired agent-based framework that simulates bioelectric state transitions in glial cells as a function of mitochondrial efficiency (Meff), ion-channel conductances, gap-junction coupling, and ROS dynamics. Using a 64x64 multicellular grid over 60,000 simulation steps, we show that reducing Meff below a critical threshold (~0.6) drives sustained depolarization, ATP collapse, and elevated ROS, reproducing key electrophysiological signatures associated with GBM. We further apply evolutionary optimization (genetic algorithms and MAP-Elites) to explore resilience, parameter sensitivity, and the emergence of tumor-like attractors. Early evolutionary runs converge toward depolarized, ROS-dominated regimes characterized by weakened electrical coupling and altered ionic transport. These results highlight mitochondrial dysfunction and disrupted bioelectric signaling as sufficient drivers of malignant-like transitions and provide a computational basis for probing the bioelectrical origins of oncogenesis.

q-bio.NC · physics.bio-ph · cs.NE
q-bio · 11-26 00:00

When Should Neural Data Inform Welfare? A Critical Framework for Policy Uses of Neuroeconomics

arXiv:2511.19548v1 Announce Type: cross Abstract: Neuroeconomics promises to ground welfare analysis in neural and computational evidence about how people value outcomes, learn from experience and exercise self-control. At the same time, policy and commercial actors increasingly invoke neural data to justify paternalistic regulation, "brain-based" interventions and new welfare measures. This paper asks under what conditions neural data can legitimately inform welfare judgements for policy rather than merely describing behaviour. I develop a non-empirical, model-based framework that links three levels: neural signals, computational decision models and normative welfare criteria. Within an actor-critic reinforcement-learning model, I formalise the inference path from neural activity to latent values and prediction errors and then to welfare claims. I show that neural evidence constrains welfare judgements only when the neural-computational mapping is well validated, the decision model identifies "true" interests versus context-dependent mistakes, and the welfare criterion is explicitly specified and defended. Applying the framework to addiction, neuromarketing and environmental policy, I derive a Neuroeconomic Welfare Inference Checklist for regulators and for designers of NeuroAI systems. The analysis treats brains and artificial agents as value-learning systems while showing that internal reward signals, whether biological or artificial, are computational quantities and cannot be treated as welfare measures without an explicit normative model.

cs.LG · q-bio.NC · q-fin.EC · cs.CY · econ.GN · cs.AI
q-bio · 11-26 00:00

Expectation-enforcing strategies for repeated games

arXiv:2511.19828v1 Announce Type: cross Abstract: Originating in evolutionary game theory, the class of "zero-determinant" strategies enables a player to unilaterally enforce linear payoff relationships in simple repeated games. An upshot of this kind of payoff constraint is that it can shape the incentives for the opponent in a predetermined way. An example is when a player ensures that the agents get equal payoffs. While extensively studied in infinite-horizon games, extensions to discounted games, nonlinear payoff relationships, richer strategic environments, and behaviors with long memory remain incompletely understood. In this paper, we provide necessary and sufficient conditions for a player to enforce arbitrary payoff relationships (linear or nonlinear), in expectation, in discounted games. These conditions characterize precisely which payoff relationships are enforceable using strategies of arbitrary complexity. Our main result establishes that any such enforceable relationship can actually be implemented using a simple two-point reactive learning strategy, which conditions on the opponent's most recent action and the player's own previous mixed action, using information from only one round into the past. For additive payoff constraints, we show that enforcement is possible using even simpler (reactive) strategies that depend solely on the opponent's last move. In other words, this tractable class is universal within expectation-enforcing strategies. As examples, we apply these results to characterize extortionate, generous, equalizer, and fair strategies in the iterated prisoner's dilemma, asymmetric donation game, nonlinear donation game, and the hawk-dove game, identifying precisely when each class of strategy is enforceable and with what minimum discount factor.
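The reactive strategies mentioned in the abstract (cooperation probability depending solely on the opponent's last move) induce a four-state Markov chain over joint actions, from which expected discounted payoffs follow directly. Below is a minimal sketch for the donation game; the payoff values, strategies, and function names are illustrative choices, not the paper's construction.

```python
def discounted_payoffs(p, q, delta, b=2.0, c=1.0, opening=(1.0, 1.0), rounds=500):
    """Expected discounted payoffs in the donation game (benefit b, cost c).
    p = (pC, pD): player 1's cooperation probability given the opponent's
    last move; q likewise for player 2; `opening` = first-round coop probs."""
    states = [(0, 0), (0, 1), (1, 0), (1, 1)]   # (m1, m2), 0 = C, 1 = D
    u1 = [b - c, -c, b, 0.0]                    # player 1's payoff per state
    u2 = [b - c, b, -c, 0.0]
    x, y = opening
    dist = [x * y, x * (1 - y), (1 - x) * y, (1 - x) * (1 - y)]
    s1 = s2 = 0.0
    w = 1.0 - delta                             # normalizes (1-d) * sum d^t
    for _ in range(rounds):
        s1 += w * sum(d * u for d, u in zip(dist, u1))
        s2 += w * sum(d * u for d, u in zip(dist, u2))
        new = [0.0] * 4
        for d, (m1, m2) in zip(dist, states):
            x, y = p[m2], q[m1]                 # each reacts to the opponent
            new[0] += d * x * y
            new[1] += d * x * (1 - y)
            new[2] += d * (1 - x) * y
            new[3] += d * (1 - x) * (1 - y)
        dist = new
        w *= delta
    return s1, s2

tft = (1.0, 0.0)                                # tit-for-tat as a reactive strategy
v1, v2 = discounted_payoffs(tft, tft, delta=0.9)              # mutual cooperation
w1, w2 = discounted_payoffs(tft, (0.0, 0.0), delta=0.9,
                            opening=(1.0, 0.0))               # TFT vs. all-defect
```

Two tit-for-tat players who open with cooperation lock into mutual cooperation and each earn the normalized payoff b - c; against unconditional defection, tit-for-tat pays the cost once and then both earn zero.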

econ.TH · cs.GT · q-bio.PE
physics · 11-26 00:00

gr-Orbit-Toolkit: A Python-Based Software for Simulating and Visualizing Relativistic Orbits

arXiv:2511.19442v1 Announce Type: new Abstract: Creating software dedicated to simulation is essential for teaching and research in Science, Technology, Engineering, and Mathematics (STEM). Physics lecturing can be more effective when digital twins are used to accompany theory classes. Research in physics has greatly benefited from the advent of modern, high-level programming languages, which facilitate the implementation of user-friendly code. Here, we report our own Python-based software, the gr-orbit-toolkit, to simulate orbits in classical and general relativistic scenarios. First, we present the ordinary differential equations (ODEs) for classical and relativistic orbital accelerations. For the latter, we follow a post-Newtonian approach. Second, we describe our algorithm, which numerically integrates these ODEs to simulate the orbits of small-sized objects orbiting around massive bodies by using Euler and Runge-Kutta methods. Then, we study a set of sample two-body models with either the Sun or a black hole in the center. Our simulations confirm that the orbital motions predicted by classical and relativistic ODEs drastically differ for bodies near the Schwarzschild radius of the central massive body. Classical mechanics explains the orbital motion of objects far away from a central massive body, but general relativity is required to study objects moving at close proximity to a massive body, where the gravitational field is strong. Our study on objects with different eccentricities confirms that our code captures relativistic orbital precession. Our convergence analysis shows the toolkit is numerically robust. Our gr-orbit-toolkit aims at facilitating teaching and research in general relativity, so a comprehensive user and developer guide is provided in the public code repository.
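The classical branch of such a toolkit boils down to integrating the two-body ODE with a Runge-Kutta scheme. A minimal sketch in astronomical units (hypothetical function names, not the gr-orbit-toolkit API):

```python
import numpy as np

def newtonian_accel(r, GM=4 * np.pi**2):
    # Classical two-body acceleration a = -GM r / |r|^3,
    # in units of AU, years, and solar masses (GM_sun = 4*pi^2)
    return -GM * r / np.linalg.norm(r) ** 3

def rk4_step(r, v, dt):
    # One fourth-order Runge-Kutta step for the coupled system (r, v)
    k1r, k1v = v, newtonian_accel(r)
    k2r, k2v = v + 0.5 * dt * k1v, newtonian_accel(r + 0.5 * dt * k1r)
    k3r, k3v = v + 0.5 * dt * k2v, newtonian_accel(r + 0.5 * dt * k2r)
    k4r, k4v = v + dt * k3v, newtonian_accel(r + dt * k3r)
    return (r + dt / 6 * (k1r + 2 * k2r + 2 * k3r + k4r),
            v + dt / 6 * (k1v + 2 * k2v + 2 * k3v + k4v))

# Earth-like circular orbit: r = 1 AU, v = 2*pi AU/yr, integrated for one period
r, v = np.array([1.0, 0.0]), np.array([0.0, 2 * np.pi])
for _ in range(1000):
    r, v = rk4_step(r, v, dt=1.0 / 1000)
```

For the relativistic branch the paper adds post-Newtonian correction terms to the acceleration; swapping `newtonian_accel` for a corrected function leaves the integrator unchanged.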

astro-ph.IM · gr-qc · physics.ed-ph
physics · 11-26 00:00

Modeling Bioelectric State Transitions in Glial Cells: An ASAL-Inspired Computational Approach to Glioblastoma Initiation

arXiv:2511.19520v1 Announce Type: new Abstract: Understanding how glioblastoma (GBM) emerges from initially healthy glial tissue requires models that integrate bioelectrical, metabolic, and multicellular dynamics. This work introduces an ASAL-inspired agent-based framework that simulates bioelectric state transitions in glial cells as a function of mitochondrial efficiency (Meff), ion-channel conductances, gap-junction coupling, and ROS dynamics. Using a 64x64 multicellular grid over 60,000 simulation steps, we show that reducing Meff below a critical threshold (~0.6) drives sustained depolarization, ATP collapse, and elevated ROS, reproducing key electrophysiological signatures associated with GBM. We further apply evolutionary optimization (genetic algorithms and MAP-Elites) to explore resilience, parameter sensitivity, and the emergence of tumor-like attractors. Early evolutionary runs converge toward depolarized, ROS-dominated regimes characterized by weakened electrical coupling and altered ionic transport. These results highlight mitochondrial dysfunction and disrupted bioelectric signaling as sufficient drivers of malignant-like transitions and provide a computational basis for probing the bioelectrical origins of oncogenesis.

q-bio.NC · physics.bio-ph · cs.NE
physics · 11-26 00:00

Experimental Demonstration of an On-Axis Laser Ranging Interferometer for Future Gravity Missions

arXiv:2511.19533v1 Announce Type: new Abstract: We experimentally demonstrate a novel interferometric architecture for next-generation gravity missions, featuring a laser ranging interferometer (LRI) that enables monoaxial transmission and reception of laser beams between two optical benches with a heterodyne frequency of 7.3 MHz. Active beam steering loops, utilizing differential wavefront sensing (DWS) signals, ensure co-alignment between the receiving (RX) beam and the transmitting (TX) beam. With spacecraft attitude jitter simulated by hexapod-driven rotations, the interferometric link achieves a pointing stability below 10 µrad/$\sqrt{\mathrm{Hz}}$ in the frequency range between 2 mHz and 0.5 Hz, and the fluctuation of the TX beam's polarization state results in a reduction of 0.14% in the carrier-to-noise-density ratio over a 15-hour continuous measurement. Additionally, tilt-to-length (TTL) coupling is experimentally investigated using the periodic scanning of the hexapod. Experimental results show that the on-axis LRI enables inter-spacecraft ranging measurements with nanometer accuracy, making it a potential candidate for future GRACE-like missions.

physics.optics · physics.ins-det · astro-ph.IM
physics · 11-26 00:00

PhysDNet: Physics-Guided Decomposition Network of Side-Scan Sonar Imagery

arXiv:2511.19539v1 Announce Type: new Abstract: Side-scan sonar (SSS) imagery is widely used for seafloor mapping and underwater remote sensing, yet the measured intensity is strongly influenced by seabed reflectivity, terrain elevation, and acoustic path loss. This entanglement makes the imagery highly view-dependent and reduces the robustness of downstream analysis. In this letter, we present PhysDNet, a physics-guided multi-branch network that decouples SSS images into three interpretable fields: seabed reflectivity, terrain elevation, and propagation loss. By embedding the Lambertian reflection model, PhysDNet reconstructs sonar intensity from these components, enabling self-supervised training without ground-truth annotations. Experiments show that the decomposed representations preserve stable geological structures, capture physically consistent illumination and attenuation, and produce reliable shadow maps. These findings demonstrate that physics-guided decomposition provides a stable and interpretable domain for SSS analysis, improving both physical consistency and downstream tasks such as registration and shadow interpretation.

cs.CV · physics.ao-ph
physics · 11-26 00:00

Designing Space-Time Metamaterials: The Central Role of Dispersion Engineering

arXiv:2511.19541v1 Announce Type: new Abstract: Space-time metamaterials are redefining wave engineering by enabling fully dynamic four-dimensional control of electromagnetic fields, allowing simultaneous manipulation of frequency, amplitude, momentum, and propagation direction. This unified functionality moves well beyond reciprocity-breaking mechanisms, marking a fundamental transition from static media to polychromatic, energy-efficient wave processors. This article establishes dispersion engineering as the core design paradigm for these dynamic systems. We show that the dispersion relation, linking frequency and wavenumber, serves as a master blueprint governing exotic wave phenomena such as nonreciprocity, beam splitting, asymmetric frequency conversion, amplification, spatial decomposition, and momentum bandgaps. By analyzing analytical dispersion surfaces and isofrequency contours in subluminal, luminal, and superluminal modulation regimes, we reveal how tailored spatiotemporal modulation orchestrates controlled energy flow among harmonic modes. We further demonstrate how this framework directly informs practical device operation, highlighting advanced implementations including angular-frequency beam multiplexing in superconducting Josephson junction arrays. Combining insights from wave theory, numerical modeling, and experimental realization, this work provides a comprehensive roadmap for leveraging dispersion engineering to design next-generation metamaterials for wireless communication, quantum technologies, and integrated photonics.

physics.optics · physics.app-ph
physics · 11-26 00:00

Perfectly Matched Metamaterials

arXiv:2511.19545v1 Announce Type: new Abstract: Fully harnessing the vast design space enabled by metamaterials to control electromagnetic (EM) fields remains an open problem for researchers. Inverse-design techniques have been shown to best exploit the degrees of freedom available in design, resulting in high-performing systems for wireless communications, sensing and analog signal processing. Nonetheless, fundamental yet powerful properties of metamaterials are still to be revealed. In this paper, we introduce the concept of Perfectly Matched Metamaterials (PMMs). PMMs are passive, inhomogeneous media that perform purely refractive field transformations under different excitations. Their advantage lies in their simplicity, reflectionless behavior and suitability for both analytical and numerical design methods. Unlike Transformation Optics, PMM-based designs are devoid of coordinate transformations. Anisotropic unit cells are configured to control EM fields in a true-time-delay manner. Simple analytical designs are reported which demonstrate the broadband capability of PMM devices. Proposed PMMs may find application in wideband beamforming and analog computing, realizing functionalities such as spatial filtering and signal pre-processing.

physics.optics · physics.app-ph
physics · 11-26 00:00

Astronomical Methods and Instrumentation in the Islamic World: Past, Present, Future

arXiv:2511.19559v1 Announce Type: new Abstract: From al-Sufi's tenth-century observation of the Andromeda Galaxy as a "little cloud" to contemporary space missions, Islamic astronomy represents a millennium-spanning tradition of innovation and knowledge. This study traces its trajectory through three phases: the Golden Age (8th to 15th centuries), when scholars such as al-Biruni, al-Battani, and Ibn Sina developed instruments, cataloged the heavens, and refined theories that later influenced Copernicus; a period of decline (late 15th to 17th centuries), shaped by political fragmentation, economic shifts, and the delayed adoption of technologies such as printing and the telescope; and today's revival, marked by observatory collaborations, Olympiad successes, and emerging space programs in Morocco, Iran, Turkey, the UAE, and Saudi Arabia. This comparative analysis with Chinese and European scientific traditions shows how Islamic astronomy provided a vital link in the global history of science, transmitting mathematical rigor, observational methods, and Arabic star names that are still used today. The contemporary resurgence signals the potential for renewed contributions to astrophysics, provided that it is supported by regional observatory networks, space-based research initiatives, and educational frameworks that integrate historical heritage with modern computational science.

physics.hist-ph · astro-ph.IM
physics · 11-26 00:00

Techno-economic feasibility study of solar ORC in India

arXiv:2511.19564v1 Announce Type: new Abstract: Solar energy has enormous potential because there is a worldwide need to meet energy demands. Depleting non-renewable energy resources, increasing carbon emissions, and other environmental effects have prompted the scientific community to develop alternative approaches to electricity production. In this article, we present a study of a solar-powered Organic Rankine cycle (ORC) considering Indian climatic conditions. Initially, we scrutinized seven working fluids and assessed their performance in the ORC at an evaporator pressure range of 9-30 bar and a mass flow rate range of 0.2 kg/s to 4.5 kg/s. For a fixed sink temperature of 298 K, we evaluate the system using four different power ratings of 2, 20, 50, and 100 kW based on four different source temperatures of 423 K, 403 K, 383 K, and 363 K. We estimate the system cost for each working fluid in each scenario separately. Our findings suggest that R1233zd(E) is the optimum working fluid based on cost, cost-effectiveness, and environmental friendliness. We also notice that the estimated system-scale cost is very competitive and could be a great alternative to the technologies already on the Indian market.

physics.soc-ph · physics.app-ph
physics · 11-26 00:00

Concept drift of simple forecast models as a diagnostic of low-frequency, regime-dependent atmospheric reorganisation

arXiv:2511.19638v1 Announce Type: new Abstract: Data-driven weather prediction models implicitly assume that the statistical relationship between predictors and targets is stationary. Under anthropogenic climate change, this assumption is violated, yet the structure of the resulting concept drift remains poorly understood. Here we introduce concept drift of simple forecast models as a diagnostic of atmospheric reorganisation. Using ERA5 reanalysis, we quantify drift in spatially explicit linear models of daily mean sea-level pressure and 2 m temperature. Models are trained on the 1950s and 2000s and evaluated on 2020 to 2024; their performance difference defines a local, interpretable drift metric. By decomposing errors by frequency band, circulation regime and region, and by mapping drift globally, we show that drift is dominated by low-frequency variability and is strongly regime-dependent. Over the North Atlantic-European sector, low-frequency drift peaks in positive NAO despite a stable large-scale NAO pattern, while Western European summer temperature drift is tightly linked to changes in land-atmosphere coupling rather than mean warming alone. In winter, extreme high-pressure frequencies increase mainly in neutral and negative NAO, whereas structural drift is concentrated in positive NAO and Alpine hotspots. Benchmarking against variance-based diagnostics shows that drift aligns much more with changes in temporal persistence than with changes in volatility or extremes. These findings demonstrate that concept drift can serve as a physically meaningful diagnostic of evolving predictability, revealing aspects of atmospheric reorganisation that are invisible to standard deviation and storm-track metrics.
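The drift metric itself is simple to state in code: fit the same linear model on two training eras and compare their errors on a common evaluation era. A synthetic sketch (invented toy data, not ERA5; variable names are mine):

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_linear(X, y):
    # Least-squares fit of a linear forecast model y ~ X @ w
    w, *_ = np.linalg.lstsq(X, y, rcond=None)
    return w

def mse(w, X, y):
    return float(np.mean((X @ w - y) ** 2))

n, p = 2000, 5
X_1950s, X_2000s, X_2020s = (rng.normal(size=(n, p)) for _ in range(3))

w_true_old = np.ones(p)                        # predictor-target relation, 1950s
w_true_new = np.ones(p); w_true_new[0] = 2.0   # relation has drifted by the 2020s

y_1950s = X_1950s @ w_true_old + 0.1 * rng.normal(size=n)
y_2000s = X_2000s @ w_true_new + 0.1 * rng.normal(size=n)
y_2020s = X_2020s @ w_true_new + 0.1 * rng.normal(size=n)

w_old = fit_linear(X_1950s, y_1950s)
w_recent = fit_linear(X_2000s, y_2000s)

# Drift metric: how much worse the old-era model does on the evaluation era
drift = mse(w_old, X_2020s, y_2020s) - mse(w_recent, X_2020s, y_2020s)
```

Here the single drifted coefficient produces a drift of roughly 1 in mean-squared-error units; in the paper this difference is computed per grid point, giving the spatial drift maps.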

stat.AP · physics.ao-ph
physics · 11-26 00:00

Active compensation of the AC Stark shift in a two-photon rubidium optical frequency reference using power modulation

arXiv:2511.19702v1 Announce Type: new Abstract: We implement a feedback protocol to suppress the AC Stark shift in a two-photon rubidium optical frequency reference, reducing its sensitivity to optical power variations by a factor of 1000. This method alleviates the tradeoff between short-term and long-term stability imposed by the AC Stark shift, enabling us to simultaneously achieve instabilities of $3\times10^{-14}$ at 1 s and $2\times10^{-14}$ at $10^4$ s. We also quantitatively describe, and experimentally explore, a stability limit imposed on clocks using this method by frequency noise on the local oscillator.

physics.atom-ph · physics.app-ph
physics · 11-26 00:00

Investigating impacts of dust events on atmospheric surface temperature in Southwest Asia using AERONET data, satellite recordings, and atmospheric models

arXiv:2511.19738v1 Announce Type: new Abstract: Dust layers have already been reported to have negative impacts on the radiation budget of the atmosphere. But the questions are: How does the atmospheric surface temperature change during a dust outbreak, and what is its temporal correlation with variations of the dust outbreak strength? We investigated these at selected AERONET sites, including Bahrain, IASBS, Karachi, KAUST Campus, Kuwait University, Lahore, Mezaira, Solar Village, in Southwest Asia, and Dushanbe in Central Asia, using available data from 1998 to 2024. The aerosol optical depth at 870 nm and the temperature recorded at each site are taken as measures of dust outbreak strength and atmospheric surface temperature, respectively. The Hybrid Single-Particle Lagrangian Integrated Trajectory (HYSPLIT) model and the aerosol optical depths recorded by the Moderate Resolution Imaging Spectroradiometers (MODIS) on board the Aqua and Terra satellites are used to specify the sources of the dust outbreaks. Our investigations show that in most cases, the temperature decreases during a dust outbreak, but in a considerable number of cases, the temperature rises. Temperature changes are mostly less than 5 °C. We found that a dust outbreak may affect the temperature even up to two days after its highest intensity time. This effect is more profound at sites far from large dust sources, such as IASBS in northwest Iran. For sites that are located on either a dust source or very close to it, the temperature and dust optical depth vary almost synchronously.

physics.optics · astro-ph.EP · physics.ao-ph
physics · 11-26 00:00

Beam Steering and Radiation Generation of Electrons in Bent Crystals in the Sub-GeV Domain

arXiv:2511.19807v1 Announce Type: new Abstract: We present an investigation into beam steering and radiation emission by sub-GeV electrons traversing bent silicon crystals. Using 855, 600, and 300 MeV electron beams at the Mainz Microtron (MAMI), we explored orientational coherent effects and particle dynamics in a 15 µm-thick crystal bent along the (111) planes. Combined experimental and simulation analyses enabled the classification and quantitative assessment of the contributions from channeling, dechanneling, rechanneling, and volume capture to both beam deflection and radiation emission. Crystal steering remained effective even at 300 MeV, with measured channeling efficiencies exceeding 50%, a record at such low energy. Channeling and volume reflection enhanced radiation emission by up to a factor of six compared to the misaligned orientation, highlighting strong orientational coherence effects in the sub-GeV regime. These findings confirm the feasibility of using bent crystals for efficient beam manipulation and high-intensity photon generation at low energies, supporting the development of novel light sources and beam control strategies at accelerator facilities operating in this energy range.

physics.acc-ph · hep-ex
physics · 11-26 00:00

Unveiling the role of seepage forces in the acceleration of landslides creep

arXiv:2511.19815v1 Announce Type: new Abstract: In the context of global climate change, geological materials are increasingly destabilized by water flow and infiltration. We study the creeping dynamics of a densely monitored landslide in Western Norway to decipher the role of fluid flow in destabilizing this landslide. In Åknes, approximately 50 million cubic meters of rock mass continuously creep over a shear zone made of rock fragments, with seasonal accelerations that strongly correlate with rainfall. In this natural laboratory for fluid-induced frictional creep, unprecedented monitoring equipment reveals low fluid pressure across the shear zone, thereby challenging the dominant theory of fluid-driven instability in landslides. Here, we show that a generic micromechanical model can disentangle the effects of fluid flow from those of fluid pressure, and demonstrate that seepage forces applied by channelized flow along the shear zone are the main driver of creep accelerations. We conclude by discussing the significance of seepage forces, the implications for hazard mitigation and the broader applicability of our model to various geological contexts governed by friction across saturated shear zones.

physics.geo-ph · physics.flu-dyn
physics · 11-26 00:00

Calibration Plan for the SBC 10-kg Liquid Argon Detector with 100 eV Target Threshold

arXiv:2511.19817v1 Announce Type: new Abstract: The Scintillating Bubble Chamber (SBC) Collaboration is designing a new generation of low background, noble liquid bubble chamber experiments with sub-keV nuclear recoil threshold. These experiments combine the electronic recoil blindness of a bubble chamber with the energy resolution of noble liquid scintillation, and maintain electron recoil discrimination at higher degrees of superheat (lower nuclear recoil thresholds) than Freon-based bubble chambers. A 10-kg liquid argon bubble chamber has the potential to set world leading limits on the dark matter nucleon cross-section for $O(\mathrm{GeV}/c^{2})$ masses, and to perform a high statistics coherent elastic neutrino nuclear scattering measurement with reactor neutrinos. This work presents a detailed calibration plan to measure the detector response of these experiments, combining photoneutron scattering with two new techniques to induce sub-keV nuclear recoils: nuclear Thomson scattering and thermal neutron capture.

physics.ins-det · hep-ex · nucl-ex
physics · 11-26 00:00

Effect of cohesion on the gravity-driven evacuation of metal powder through Triply-Periodic Minimal Surface structures

arXiv:2511.19821v1 Announce Type: new Abstract: Evacuating the powder trapped inside the complex cavities of Triply Periodic Minimal Surface (TPMS) structures remains a major challenge in metal-powder-based additive manufacturing. The Discrete Element Method offers valuable insights into this evacuation process, enabling the design of effective de-powdering strategies. In this study, we simulate gravity-driven evacuation of trapped powders from inside unit cells of various TPMS structures. We systematically investigate the role of cohesive energy density in shaping the discharge profile. Overall, we conclude that the Schwarz-P and Gyroid topologies enable the most efficient powder evacuation, remaining resilient to cohesion-induced flow hindrance. Furthermore, for the two unit cells, we analyse detailed kinematics and interpret the results in relation to particle overlaps and contact force distributions.

cond-mat.soft · physics.comp-ph
physics · 11-26 00:00

Nanophotonic magnetometry in a spin-dense diamond cavity

arXiv:2511.19831v1 Announce Type: new Abstract: Quantum sensors based on the nitrogen-vacancy (NV) center in diamond are leading platforms for high-sensitivity magnetometry with nanometer-scale resolution. State-of-the-art implementations, however, typically rely on bulky free-space optics or sacrifice spatial resolution to achieve high sensitivities. Here, we realize an integrated platform that overcomes this trade-off by fabricating monolithic whispering-gallery-mode cavities from a diamond chip containing a high density of NV centers and by evanescently coupling excitation to and photoluminescence from the cavity using a tapered optical fiber. Employing a lock-in-amplified Ramsey magnetometry scheme, we achieve a photon-shot-noise-limited DC sensitivity of $58\,\text{nT}/\sqrt{\text{Hz}}$ -- the best sensitivity reported to date for a nanofabricated cavity-based magnetometer. The microscopic cavity size enables sub-micrometer-scale spatial resolution and low-power operation, while fiber-coupling provides a path to scalable on-chip integration. Arrays of such sensors could enable NV-NMR spectroscopy of sub-nanoliter samples, new magnetic-gradient imaging architectures, and compact biosensing platforms.

physics.optics · quant-ph
math · 11-26 00:00

Polynomial Algorithms for Simultaneous Unitary Similarity and Equivalence

arXiv:2511.19439v1 Announce Type: new Abstract: We present an algorithm to solve the Simultaneous Unitary Similarity (S.U.S.) problem, which is to check whether there exists a similarity transformation determined by a unitary $U$ such that $UA_lU^*=B_l$, $l \in \{1,\dots,p\}$, where $A_l$ and $B_l$ are $n \times n$ complex matrices. We observe that the problem is simplest when $U$ is diagonal, where the `paths' in the graph defined by non-zero elements of $A_l$ and $B_l$ determine the solution. Inspired by this, we generalize to the case when $U$ is block-diagonal to identify a form referred to as the `Solution-form', using `paths' determined by non-zero sub-matrices of $A_l, B_l$ that are non-zero multiples of a unitary. When not in Solution-form, we find an equivalent problem to solve by diagonalizing a Hermitian or a normal matrix related to the sub-matrices. The problem is solved in a maximum of $n$ steps. The same idea extends to the Simultaneous Unitary Equivalence (S.U.Eq.) problem, where we solve for $U, V$ in $UA_lV^*=B_l$, with $A_l, B_l$ being $m \times n$ complex rectangular matrices; here we work with the `paths' in the related bi-graph to define the Solution-form. The algorithms have a complexity of $O(pn^4)$. This work finds application in quantum evolution, quantum gate design, and simulation. The salient features of each step of the algorithm can be retained as canonical features to classify a given collection of complex matrices up to unitary similarity.
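Verifying a candidate solution makes the problem statement concrete. The sketch below (not the paper's algorithm) generates a random instance with a known unitary and checks the defining relation $UA_lU^*=B_l$:

```python
import numpy as np

rng = np.random.default_rng(1)

def random_unitary(n):
    # QR of a complex Gaussian matrix gives a Haar-distributed unitary
    z = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    q, r = np.linalg.qr(z)
    phases = np.diag(r) / np.abs(np.diag(r))
    return q * phases          # equivalent to q @ np.diag(phases)

def is_sus(As, Bs, U, tol=1e-10):
    # The defining relation of the S.U.S. problem: U A_l U* = B_l for all l
    return all(np.allclose(U @ A @ U.conj().T, B, atol=tol)
               for A, B in zip(As, Bs))

n, p = 4, 3
U = random_unitary(n)
As = [rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n)) for _ in range(p)]
Bs = [U @ A @ U.conj().T for A in As]   # a planted yes-instance
```

The paper's contribution is finding such a $U$ in at most $n$ steps; this check is only the $O(pn^3)$ verification step.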

cs.DS · quant-ph · math.RA
math · 11-26 00:00

The Quality of Information: A Weighted Entropy Approach to Near-Optimal Mastermind

arXiv:2511.19446v1 Announce Type: new Abstract: This paper presents a novel class of information-theoretic strategies for solving the game of Mastermind, achieving state-of-the-art performance among known heuristic methods. The core contribution is the application of a weighted entropy heuristic, based on the Belis-Guiaşu framework, which assigns context-dependent utility values to each of the possible feedback types. A genetic algorithm optimization approach discovers interpretable weight patterns that reflect strategic game dynamics. First, I demonstrate that a single, fixed vector of optimized weights achieves a remarkable 4.3565 average guesses with a maximum of 5. Building upon this, I introduce a stage-weighted heuristic with distinct utility vectors for each turn, achieving 4.3488 average guesses with a maximum of 6, approaching the theoretical optimum of 4.3403 by less than 0.2%. The method retains the computational efficiency of classical one-step-ahead heuristics while significantly improving performance through principled information valuation. A complete implementation and all optimized parameters are provided for full reproducibility.
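The heuristic scores a guess by a Belis-Guiaşu-style weighted entropy over the feedback partition of the remaining candidates. A minimal sketch on a toy 3-colour, 4-peg instance, with uniform weights (which reduce to ordinary Shannon entropy); the weights and instance are mine, not the paper's optimized values:

```python
from collections import Counter
from itertools import product
from math import log2

def feedback(guess, code):
    # Standard Mastermind feedback: (black pegs, white pegs)
    black = sum(g == c for g, c in zip(guess, code))
    common = sum(min(guess.count(x), code.count(x)) for x in set(guess))
    return black, common - black

def weighted_entropy(guess, candidates, weights):
    # Belis-Guiasu weighted entropy of the feedback partition:
    # H_w = -sum_i w_i * p_i * log2(p_i), one term per feedback type;
    # unspecified feedback types default to weight 1 (plain Shannon term)
    counts = Counter(feedback(guess, c) for c in candidates)
    n = len(candidates)
    return -sum(weights.get(fb, 1.0) * (k / n) * log2(k / n)
                for fb, k in counts.items())

# Toy instance: all 3^4 = 81 codes remain; uniform weights = Shannon entropy
candidates = list(product(range(3), repeat=4))
h = weighted_entropy((0, 1, 2, 0), candidates, weights={})
```

A one-step-ahead solver would evaluate `weighted_entropy` for every legal guess and play the maximizer; the paper's contribution is choosing non-uniform, stage-dependent weights.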

cs.IT · cs.GT · math.IT
math · 11-26 00:00

Linear Geometry: flats, ranks, regularity, parallelity

arXiv:2511.19455v1 Announce Type: new Abstract: Linear Geometry describes geometric properties that depend on the fundamental notion of a line. In this paper we survey basic notions and results of Linear Geometry that depend on flat hulls: flats, exchange, rank, regularity, modularity, and parallelity.

math.HO · math.MG
math · 11-26 00:00

Taffy, Trees, and Tangles

arXiv:2511.19461v1 Announce Type: new Abstract: We study the relationship between three combinatorial objects -- a taffy pulling machine, the Calkin-Wilf tree of all fractions, and Conway's rational tangles. After introducing these objects, we develop a taffy analogue for Conway's characterization of rational tangles, and we give a direct geometric connection between rational tangles and taffy pulls.

math.HO · math.GT · math.CO
math · 11-26 00:00

Euler's work on spherical geometry: An overview with comments

arXiv:2511.19531v1 Announce Type: new Abstract: We review Euler's work on spherical geometry. After an introduction concerning the general place that trigonometric formulae occupy in geometry, we start with the two memoirs of Euler on spherical trigonometry, in which he establishes the trigonometric formulae using different methods, namely, the calculus of variations in the first memoir, and classical methods of solid geometry in the other. In another memoir, Euler gives several formulae for the area of a spherical triangle in terms of its side lengths (these are "spherical Heron formulae"). He uses this in the computation of numerical values of the solid angles of the five regular polyhedra, which is the goal of that memoir. We then review memoirs in which Euler systematically starts by establishing a theorem or a construction in Euclidean geometry and then proves an analogue in spherical geometry. We point out relations between Euler's memoirs on spherical trigonometry and works he did in astronomy, on the problem of drawing geographical maps, and in geomagnetism. We also review some other works of Euler involving spheres, including a memoir on the three-dimensional Apollonius problem and others concerning algebraic curves on the sphere. Even though these works are not properly on spherical geometry, they show Euler's interests in various questions related to spheres and we think that they are worth highlighting in such an overview. Beyond spherical geometry, the reader is invited to discover in this article an important facet of the work of the great Leonhard Euler. This article will appear as a chapter in the book "Spherical geometry in the eighteenth century, I: Euler, Lagrange and Lambert", Springer, 2026.

math.HO · math.GT
math · 11-26 00:00

The Ginzburg-Landau equations: Vortex states and numerical multiscale approximations

arXiv:2511.19540v1 Announce Type: new Abstract: In this review article, we provide an overview of recent advances in the numerical approximation of minimizers of the Ginzburg-Landau energy in multiscale spaces. Such minimizers represent the most stable states of type-II superconductors and, for large material parameters $\kappa$, capture the formation of lattices of quantized vortices. As the vortex cores shrink with increasing $\kappa$, while their number grows, it is essential to understand how $\kappa$ should couple to the mesh size in order to correctly resolve the vortex patterns in numerical simulations. We summarize and discuss recent developments based on LOD (Localized Orthogonal Decomposition) multiscale methods and review the corresponding error estimates that explicitly reflect the $\kappa$-dependence and the observed superconvergence. In addition, we include several minor refinements and extensions of existing results by incorporating techniques from recent contributions to the field. Finally, numerical experiments are presented to illustrate and support the theoretical findings.

math.NA · cs.NA
math · 11-26 00:00

The Semiotic Channel Principle: Measuring the Capacity for Meaning in LLM Communication

arXiv:2511.19550v1 Announce Type: new Abstract: This paper proposes a novel semiotic framework for analyzing Large Language Models (LLMs), conceptualizing them as stochastic semiotic engines whose outputs demand active, asymmetric human interpretation. We formalize the trade-off between expressive richness (semiotic breadth) and interpretive stability (decipherability) using information-theoretic tools. Breadth is quantified as source entropy, and decipherability as the mutual information between messages and human interpretations. We introduce a generative complexity parameter ($\lambda$) that governs this trade-off, as both breadth and decipherability are functions of $\lambda$. The core trade-off is modeled as an emergent property of their distinct responses to $\lambda$. We define a semiotic channel, parameterized by audience and context, and posit a capacity constraint on meaning transmission, operationally defined as the maximum decipherability obtained by optimizing $\lambda$. This reframing shifts analysis from opaque model internals to observable textual artifacts, enabling empirical measurement of breadth and decipherability. We demonstrate the framework's utility across four key applications: (i) model profiling; (ii) optimizing prompt/context design; (iii) risk analysis based on ambiguity; and (iv) adaptive semiotic systems. We conclude that this capacity-based semiotic approach offers a rigorous, actionable toolkit for understanding, evaluating, and designing LLM-mediated communication.
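Decipherability as defined here is a mutual information. Computing it for a finite message-interpretation joint distribution takes a few lines (an illustration of the quantity itself, not the paper's estimator; the two toy channels are mine):

```python
import numpy as np

def mutual_information(p_joint):
    # I(X;Y) = sum_{x,y} p(x,y) * log2( p(x,y) / (p(x) p(y)) )
    p_joint = np.asarray(p_joint, dtype=float)
    px = p_joint.sum(axis=1, keepdims=True)   # marginal over messages
    py = p_joint.sum(axis=0, keepdims=True)   # marginal over interpretations
    mask = p_joint > 0                        # 0 * log 0 = 0 by convention
    return float(np.sum(p_joint[mask] *
                        np.log2(p_joint[mask] / (px @ py)[mask])))

# Perfectly decipherable: each message yields one interpretation (I = 2 bits)
ident = np.eye(4) / 4
# Undecipherable: interpretation independent of message (I = 0 bits)
indep = np.full((4, 4), 1 / 16)
```

In the paper's terms, sweeping the generative parameter $\lambda$ and maximizing this quantity over it gives the operational channel capacity.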

cs.IT · cs.AI · math.IT
math · 11-26 00:00

One-Shot Coding and Applications

arXiv:2511.19556v1 Announce Type: new Abstract: One-shot information theory addresses scenarios in source coding and channel coding where the signal blocklength is assumed to be 1. In this case, each source and channel can be used only once, and the sources and channels are arbitrary and not required to be memoryless or ergodic. We study the achievability part of one-shot information theory, i.e., we consider explicit coding schemes in the one-shot scenario. The objective is to derive one-shot achievability results that can imply existing (first-order and second-order) asymptotic results when applied to memoryless sources and channels, or applied to systems with memory that behave ergodically. Poisson functional representation was first proposed as a one-shot channel simulation technique by Li and El Gamal [118] for proving a strong functional representation lemma. It was later extended to the Poisson matching lemma by Li and Anantharam [117], which provided a unified one-shot coding scheme for a broad class of information-theoretic problems. The main contribution of this thesis is to extend the applicability of Poisson functional representation to various more complicated scenarios, where the original version cannot be applied directly and further extensions must be developed.

cs.IT math.IT
math math 11-26 00:00

The Fourier Ratio and complexity of signals

arXiv:2511.19560v1 Announce Type: new Abstract: We study the Fourier ratio of a signal $f:\mathbb Z_N\to\mathbb C$, \[ \mathrm{FR}(f)\ :=\ \sqrt{N}\,\frac{\|\widehat f\|_{L^1(\mu)}}{\|\widehat f\|_{L^2(\mu)}} \ =\ \frac{\|\widehat f\|_1}{\|\widehat f\|_2}, \] as a simple scalar parameter governing Fourier-side complexity, structure, and learnability. Using the Bourgain--Talagrand theory of random subsets of orthonormal systems, we show that signals concentrated on generic sparse sets necessarily have large Fourier ratio, while small $\mathrm{FR}(f)$ forces $f$ to be well-approximated in both $L^2$ and $L^\infty$ by low-degree trigonometric polynomials. Quantitatively, the class $\{f:\mathrm{FR}(f)\le r\}$ admits degree $O(r^2)$ $L^2$-approximants, which we use to prove that small Fourier ratio implies small algorithmic rate--distortion, a stable refinement of Kolmogorov complexity.
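As a quick illustration of the quantity being studied, FR(f) is directly computable from a DFT. The sketch below (NumPy; the unitary normalization is our reading of the abstract's formula, though the ratio is scale-invariant anyway) checks the two extremes: a pure tone has FR = 1, while a delta function, whose spectrum is flat, attains the maximum FR = √N.

```python
import numpy as np

def fourier_ratio(f):
    """FR(f) = ||f_hat||_1 / ||f_hat||_2 (scale-invariant in the DFT
    normalization; we use the unitary convention for clarity)."""
    fhat = np.fft.fft(f) / np.sqrt(len(f))
    return np.abs(fhat).sum() / np.sqrt((np.abs(fhat) ** 2).sum())

N = 64
# Pure complex exponential: all spectral mass on one frequency -> FR = 1.
tone = np.exp(2j * np.pi * 3 * np.arange(N) / N)
# Delta function: perfectly flat spectrum -> FR = sqrt(N), the maximum.
delta = np.zeros(N)
delta[0] = 1.0

print(fourier_ratio(tone))   # -> 1.0
print(fourier_ratio(delta))  # -> 8.0 (= sqrt(64))
```

Signals with small FR concentrate their spectral mass on few frequencies, which is exactly what makes them approximable by low-degree trigonometric polynomials.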

cs.IT math.CA math.IT
math math 11-26 00:00

A Hybrid Dominant-Interferer Approximation for SINR Coverage in Poisson Cellular Networks

arXiv:2511.19568v1 Announce Type: new Abstract: Accurate radio propagation and interference modeling is essential for the design and analysis of modern cellular networks. Stochastic geometry offers a rigorous framework by treating base station locations as a Poisson point process and enabling coverage characterization through spatial averaging, but its expressions often involve nested integrals and special functions that limit general applicability. Probabilistic interference models seek closed-form characterizations through moment-based approximations, yet these expressions remain tractable only for restricted parameter choices and become unwieldy when interference moments lack closed-form representations. This work introduces a hybrid approximation framework that addresses these challenges by combining Monte Carlo sampling of a small set of dominant interferers with a Laplace functional representation of the residual far-field interference. The resulting dominant-plus-tail structure provides a modular, numerically stable, and path-loss-agnostic estimator suitable for both noise-limited and interference-limited regimes. We further derive theoretical error bounds that decrease with the number of dominant interferers and validate the approach against established stochastic geometry and probabilistic modeling benchmarks.
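For context, the brute-force baseline such a hybrid framework is benchmarked against fits in a few lines: simulate the full PPP, associate the typical user with its nearest base station, and count SINR successes. All parameter values below are illustrative; with Rayleigh fading, path-loss exponent 4, and a 0 dB threshold, the interference-limited coverage probability has the known closed form 1/(1 + π/4) ≈ 0.56, which the estimate should approach.

```python
import numpy as np

def coverage_prob(lam=1e-4, alpha=4.0, theta=1.0, radius=2000.0,
                  trials=3000, seed=0):
    """Brute-force Monte Carlo SINR coverage for a PPP of base stations
    (interference-limited, Rayleigh fading, nearest-BS association).
    This is the naive baseline, not the paper's dominant-plus-tail estimator."""
    rng = np.random.default_rng(seed)
    covered = 0
    for _ in range(trials):
        n = rng.poisson(lam * np.pi * radius ** 2)   # number of BSs in the disk
        if n == 0:
            continue
        r = radius * np.sqrt(rng.random(n))          # uniform-in-disk distances
        h = rng.exponential(size=n)                  # Rayleigh fading power gains
        p = h * r ** (-alpha)                        # received powers
        i = np.argmin(r)                             # serve from the nearest BS
        interference = p.sum() - p[i]
        if p[i] > theta * interference:
            covered += 1
    return covered / trials

pc = coverage_prob()
print(pc)  # close to 1/(1 + pi/4) ~ 0.56
```

The paper's hybrid estimator replaces the expensive far-field part of this simulation with a Laplace-functional term, sampling only a handful of dominant interferers.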

cs.IT eess.SP math.PR math.IT
math math 11-26 00:00

The Parabolic K-motivic Hecke Category

arXiv:2511.19618v1 Announce Type: new Abstract: We define and study the parabolic K-motivic Hecke category of a (possibly disconnected) Kac-Moody group. Our main result is a combinatorial description via singular K-theory Soergel bimodules which arise from the equivariant algebraic K-theory of parabolic Bott-Samelson resolutions. In the spherical affine case, the K-motivic Hecke category serves as one side of a conjectural quantum K-theoretic derived Satake equivalence, addressing a conjecture of Cautis-Kamnitzer.

math.KT math.AG math.RT
math math 11-26 00:00

A sufficient condition for generalized spectral characterization of graphs with loops

arXiv:2511.19625v1 Announce Type: new Abstract: Sufficient conditions for a simple graph to be characterized up to isomorphism given its spectrum and the spectrum of its complement graph are known due to Wang and Xu. This note establishes a related sufficient condition in the presence of loops: if the walk matrix has square-free determinant, then the graph is characterized by its generalized spectrum. The proof includes a general result about symmetric integral matrices.
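The sufficient condition is directly checkable. A minimal sketch in exact integer arithmetic (the small graph-with-loop example is ours, not from the paper): build the walk matrix W = [e, Ae, ..., A^{n-1}e] for e the all-ones vector, take its determinant, and test square-freeness.

```python
def matvec(A, v):
    return [sum(a * x for a, x in zip(row, v)) for row in A]

def walk_matrix(A):
    """W = [e, Ae, A^2 e, ..., A^{n-1} e] as columns, e the all-ones vector."""
    n = len(A)
    v = [1] * n
    cols = []
    for _ in range(n):
        cols.append(v)
        v = matvec(A, v)
    return [[cols[j][i] for j in range(n)] for i in range(n)]

def det_bareiss(M):
    """Exact integer determinant via Bareiss fraction-free elimination."""
    M = [row[:] for row in M]
    n, sign, prev = len(M), 1, 1
    for k in range(n - 1):
        if M[k][k] == 0:                 # pivot: swap in a row with nonzero entry
            for r in range(k + 1, n):
                if M[r][k] != 0:
                    M[k], M[r] = M[r], M[k]
                    sign = -sign
                    break
            else:
                return 0
        for i in range(k + 1, n):
            for j in range(k + 1, n):
                M[i][j] = (M[i][j] * M[k][k] - M[i][k] * M[k][j]) // prev
        prev = M[k][k]
    return sign * M[n - 1][n - 1]

def is_squarefree(d):
    d = abs(d)
    if d == 0:
        return False
    p = 2
    while p * p <= d:
        if d % (p * p) == 0:
            return False
        p += 1
    return True

# Path 1-2-3 with a loop at vertex 1 (diagonal entry 1 in the adjacency matrix).
A = [[1, 1, 0],
     [1, 0, 1],
     [0, 1, 0]]
d = det_bareiss(walk_matrix(A))
print(d, is_squarefree(d))  # -1 True
```

For this toy graph the walk-matrix determinant is ±1, hence trivially square-free, so by the note's criterion the graph is determined by its generalized spectrum.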

math.NT math.SP math.CO
math math 11-26 00:00

Computer-aided Characterization of Fundamental Limits of Coded Caching with Linear Coding

arXiv:2511.19639v1 Announce Type: new Abstract: Inspired by prior work by Tian and by Cao and Xu, this paper presents an efficient computer-aided framework to characterize the fundamental limits of coded caching systems under the constraint of linear coding. The proposed framework considers non-Shannon-type inequalities which are valid for representable polymatroids (and hence for linear codes), and leverages symmetric structure and problem-specific constraints of coded caching to reduce the complexity of the linear program. The derived converse bounds are tighter compared to previous known analytic methods, and prove the optimality of some achievable memory-load tradeoff points under the constraint of linear coding placement and delivery. These results seem to indicate that small, structured demand subsets combined with minimal common information constructions may be sufficient to characterize optimal tradeoffs under linear coding.

cs.IT math.IT
math math 11-26 00:00

Change Action Derivatives in Persistent Homology

arXiv:2511.19665v1 Announce Type: new Abstract: Persistent homology is a popular technique in topological data analysis that tracks the lifespans of homological features in a nested sequence of spaces. This data is typically presented in a multi-set called a persistence diagram or a barcode. For single parameter filtrations with homology coefficients taken in a principal ideal domain, the persistence diagram/barcode can be computed using the presentation theorem for finitely generated modules over a PID. One way to reconstruct the persistence diagram/barcode is to consider the rank of the pair group at all intervals, as defined by Edelsbrunner and Harer, which counts the number of homology classes whose lifespans are precisely those intervals. In this paper we generalize the rank of the pair group for suitably `tame' filtrations, described as functors from a partially ordered set to a category of chain complexes, and show how it can be captured by a categorical version of the calculus of finite-differences for abelian groups.

math.AT math.CT
math math 11-26 00:00

Anytime-Feasible First-Order Optimization via Safe Sequential QCQP

arXiv:2511.19675v1 Announce Type: new Abstract: This paper presents the Safe Sequential Quadratically Constrained Quadratic Programming (SS-QCQP) algorithm, a first-order method for smooth inequality-constrained nonconvex optimization that guarantees feasibility at every iteration. The method is derived from a continuous-time dynamical system whose vector field is obtained by solving a convex QCQP that enforces monotonic descent of the objective and forward invariance of the feasible set. The resulting continuous-time dynamics achieve an $O(1/t)$ convergence rate to first-order stationary points under standard constraint qualification conditions. We then propose a safeguarded Euler discretization with adaptive step-size selection that preserves this convergence rate while maintaining both descent and feasibility in discrete time. To enhance scalability, we develop an active-set variant (SS-QCQP-AS) that selectively enforces constraints near the boundary, substantially reducing computational cost without compromising theoretical guarantees. Numerical experiments on a multi-agent nonlinear optimal control problem demonstrate that SS-QCQP and SS-QCQP-AS maintain feasibility, exhibit the predicted convergence behavior, and deliver solution quality comparable to second-order solvers such as SQP and IPOPT.

cs.RO math.OC cs.SY eess.SY
math math 11-26 00:00

Provably fully discrete energy-stable and asymptotic-preserving scheme for barotropic Euler equations

arXiv:2511.19679v1 Announce Type: new Abstract: We develop structure-preserving finite volume schemes for the barotropic Euler equations in the low Mach number regime. Our primary focus lies in ensuring both the asymptotic-preserving (AP) property and the discrete entropy stability. We construct an implicit-explicit (IMEX) method with suitable acoustic/advection splitting including implicit numerical diffusion that is independent of the Mach number. We prove the positivity of density, the entropy stability, and the asymptotic consistency of the fully discrete numerical method rigorously. Numerical experiments for benchmark problems validate the structure-preserving properties of the proposed method.

math.NA cs.NA
cs cs 11-26 00:00

Opt4GPTQ: Co-Optimizing Memory and Computation for 4-bit GPTQ Quantized LLM Inference on Heterogeneous Platforms

arXiv:2511.19438v1 Announce Type: new Abstract: The increasing adoption of large language models (LLMs) on heterogeneous computing platforms poses significant challenges for achieving high inference efficiency. To address the low inference efficiency of LLMs across diverse heterogeneous platforms, this paper proposes a practical optimization method, Opt4GPTQ, designed for 4-bit GPTQ quantized LLM inference on heterogeneous AI accelerators. Built upon the vLLM serving system, Opt4GPTQ integrates three platform-level optimization strategies: Shared Memory Buffering optimization (SMB-Opt), which caches data in shared memory and employs single-threaded writes; Vectorized Memory Loading optimization (VML-Opt), which utilizes vectorized memory operations for efficient data loading; and Inline Assembly optimization (ILA-Opt), which directly leverages hardware-native vector half-precision addition and fused multiply-accumulate instructions for efficient execution. Experimental results show that Opt4GPTQ effectively improves inference performance across different models, achieving up to 84.42% throughput improvement and up to 51.35% latency reduction. This work highlights the critical role of platform-level engineering optimizations in enabling efficient LLM inference on emerging heterogeneous AI acceleration architectures and provides valuable deployment experience and methodologies for future heterogeneous platform adaptation.

cs.DC cs.PF
cs cs 11-26 00:00

Asynchronous Cooperative Optimization of a Capacitated Vehicle Routing Problem Solution

arXiv:2511.19445v1 Announce Type: new Abstract: We propose a parallel shared-memory schema to cooperatively optimize the solution of a Capacitated Vehicle Routing Problem instance with minimal synchronization effort and without the need for an explicit decomposition. To this end, we design FILO2$^x$ as a single-trajectory parallel adaptation of the FILO2 algorithm originally proposed for extremely large-scale instances and described in Accorsi and Vigo (2024). Exploiting the locality of the FILO2 optimization applications, in FILO2$^x$ several possibly unrelated solution areas are concurrently and asynchronously optimized. The overall search trajectory emerges as an iteration-based parallelism obtained by the simultaneous optimization of the same underlying solution performed by several solvers. Despite the high efficiency exhibited by the single-threaded FILO2 algorithm, the computational results show that, by better exploiting the available computing resources, FILO2$^x$ can greatly reduce the resolution time compared to the original approach while maintaining a similar final solution quality for instances ranging from hundreds to hundreds of thousands of customers.

cs.DC cs.DM
cs cs 11-26 00:00

The Quality of Information: A Weighted Entropy Approach to Near-Optimal Mastermind

arXiv:2511.19446v1 Announce Type: new Abstract: This paper presents a novel class of information-theoretic strategies for solving the game of Mastermind, achieving state-of-the-art performance among known heuristic methods. The core contribution is the application of a weighted entropy heuristic, based on the Belis-Guiasu framework, which assigns context-dependent utility values to each of the possible feedback types. A genetic algorithm optimization approach discovers interpretable weight patterns that reflect strategic game dynamics. First, I demonstrate that a single, fixed vector of optimized weights achieves a remarkable 4.3565 average guesses with a maximum of 5. Building upon this, I introduce a stage-weighted heuristic with distinct utility vectors for each turn, achieving 4.3488 average guesses with a maximum of 6, approaching the theoretical optimum of 4.3403 by less than 0.2%. The method retains the computational efficiency of classical one-step-ahead heuristics while significantly improving performance through principled information valuation. A complete implementation and all optimized parameters are provided for full reproducibility.
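The heuristic's core computation is compact enough to sketch. Below is a toy 3-peg, 3-color version (the `weights` dict is a stand-in for the paper's optimized utility vectors; uniform weights recover the plain Shannon-entropy heuristic):

```python
import math
from collections import Counter
from itertools import product

def feedback(guess, code):
    """Standard Mastermind feedback: (black pegs, white pegs)."""
    black = sum(g == c for g, c in zip(guess, code))
    common = sum(min(guess.count(v), code.count(v)) for v in set(guess))
    return black, common - black

def weighted_entropy(guess, candidates, weights):
    """Belis-Guiasu weighted entropy of the feedback partition induced by a
    guess: H_w = -sum_i w_i p_i log2 p_i, where p_i is the fraction of
    candidates producing feedback i and w_i its utility weight (default 1,
    which reduces to ordinary Shannon entropy)."""
    n = len(candidates)
    counts = Counter(feedback(guess, c) for c in candidates)
    return -sum(weights.get(fb, 1.0) * (m / n) * math.log2(m / n)
                for fb, m in counts.items())

codes = list(product(range(3), repeat=3))
# One-step-ahead choice: pick the guess maximizing (weighted) information gain.
best = max(codes, key=lambda g: weighted_entropy(g, codes, {}))
print(best, round(weighted_entropy(best, codes, {}), 3))
```

The stage-weighted variant in the paper simply swaps in a different `weights` dict per turn.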

cs.IT cs.GT math.IT
cs cs 11-26 00:00

Power sector models featuring individual BEV profiles: Assessing the time-accuracy trade-off

arXiv:2511.19449v1 Announce Type: new Abstract: Electrifying passenger cars will impact future power systems. To understand the challenges and opportunities that arise, it is necessary to reflect "sector coupling" in the modeling space. This paper focuses on a specific modeling approach that includes dozens of individual BEV profiles rather than one aggregated BEV profile. Although including additional BEV profiles increases model complexity and runtime, it avoids losing information in the aggregation process. We investigate how many profiles are needed to ensure the accuracy of the results and the extent to which fewer profiles can be traded for runtime efficiency gains. We also examine whether selecting specific profiles influences optimal results. We demonstrate that including too few profiles may result in distorted optimal solutions. However, beyond a certain threshold, adding more profiles does not significantly enhance the robustness of the results. More generally, for fleets of 5 to 20 million BEVs, we derive a rule of thumb consisting in including enough profiles such that each profile represents 200,000 to 250,000 vehicles, ensuring accurate results without excessive runtime.
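The closing rule of thumb is simple arithmetic; a sketch (225,000 vehicles per profile is our illustrative midpoint of the stated 200,000-250,000 range):

```python
def profiles_needed(fleet_size, vehicles_per_profile=225_000):
    """One individual BEV profile per ~200k-250k vehicles (midpoint default)."""
    return max(1, round(fleet_size / vehicles_per_profile))

print(profiles_needed(5_000_000))   # -> 22
print(profiles_needed(20_000_000))  # -> 89
```

So a 5-20 million BEV fleet calls for roughly 20-90 individual profiles under this rule.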

eess.SY cs.SY
cs cs 11-26 00:00

Strong Duality and Dual Ascent Approach to Continuous-Time Chance-Constrained Stochastic Optimal Control

arXiv:2511.19451v1 Announce Type: new Abstract: The paper addresses a continuous-time continuous-space chance-constrained stochastic optimal control (SOC) problem where the probability of failure to satisfy given state constraints is explicitly bounded. We leverage the notion of exit time from continuous-time stochastic calculus to formulate a chance-constrained SOC problem. Without any conservative approximation, the chance constraint is transformed into an expectation of an indicator function which can be incorporated into the cost function by considering a dual formulation. We then express the dual function in terms of the solution to a Hamilton-Jacobi-Bellman partial differential equation parameterized by the dual variable. Under a certain assumption on the system dynamics and cost function, it is shown that a strong duality holds between the primal chance-constrained problem and its dual. The path integral approach is utilized to numerically solve the dual problem via gradient ascent using open-loop samples of system trajectories. We present simulation studies on chance-constrained motion planning for spatial navigation of mobile robots, and the solution of the path integral approach is compared with that of the finite difference method.

cs.RO eess.SY cs.SY
cs cs 11-26 00:00

A Data-Driven Model Predictive Control Framework for Multi-Aircraft TMA Routing Under Travel Time Uncertainty

arXiv:2511.19452v1 Announce Type: new Abstract: This paper presents a closed-loop framework for conflict-free routing and scheduling of multi-aircraft in Terminal Manoeuvring Areas (TMA), aimed at reducing congestion and enhancing landing efficiency. Leveraging data-driven arrival inputs (either historical or predicted), we formulate a mixed-integer optimization model for real-time control, incorporating an extended TMA network spanning a 50-nautical-mile radius around Changi Airport. The model enforces safety separation, speed adjustments, and holding time constraints while maximizing runway throughput. A rolling-horizon Model Predictive Control (MPC) strategy enables closed-loop integration with a traffic simulator, dynamically updating commands based on real-time system states and predictions. Computational efficiency is validated across diverse traffic scenarios, demonstrating a 7-fold reduction in computation time during peak congestion compared to one-time optimization, using a Singapore ADS-B dataset. Monte Carlo simulations under travel time disturbances further confirm the framework's robustness. Results highlight the approach's operational resilience and computational scalability, offering actionable decision support for Air Traffic Control Officers (ATCOs) through real-time optimization and adaptive replanning.

cs.MA eess.SY cs.SY
cs cs 11-26 00:00

AVS: A Computational and Hierarchical Storage System for Autonomous Vehicles

arXiv:2511.19453v1 Announce Type: new Abstract: Autonomous vehicles (AVs) are evolving into mobile computing platforms, equipped with powerful processors and diverse sensors that generate massive heterogeneous data, for example 14 TB per day. Supporting emerging third-party applications calls for a general-purpose, queryable onboard storage system. Yet today's data loggers and storage stacks in vehicles fail to deliver efficient data storage and retrieval. This paper presents AVS, an Autonomous Vehicle Storage system that co-designs computation with a hierarchical layout: modality-aware reduction and compression, hot-cold tiering with daily archival, and a lightweight metadata layer for indexing. The design is grounded with system-level benchmarks on AV data that cover SSD and HDD filesystems and embedded indexing, and is validated on embedded hardware with real L4 autonomous driving traces. The prototype delivers predictable real-time ingest, fast selective retrieval, and substantial footprint reduction under modest resource budgets. The work also outlines observations and next steps toward more scalable and longer deployments to motivate storage as a first-class component in AV stacks.

cs.RO cs.DB cs.DC cs.OS
cs cs 11-26 00:00

A K-means Inspired Solution Framework for Large-Scale Multi-Traveling Salesman Problems

arXiv:2511.19454v1 Announce Type: new Abstract: The Multi-Traveling Salesman Problem (MTSP) is a commonly used mathematical model for multi-agent task allocation. However, as the number of agents and task targets increases, existing optimization-based methods often incur prohibitive computational costs, posing significant challenges to large-scale coordination in unmanned systems. To address this issue, this paper proposes a K-means-inspired task allocation framework that reformulates the MTSP as a spatially constrained classification process. By leveraging spatial coherence, the proposed method enables fast estimation of path costs and efficient task grouping, thereby fundamentally reducing overall computational complexity. Extensive simulation results demonstrate that the framework can maintain high solution quality even in extremely large-scale scenarios, for instance tasks involving 1000 agents and 5000 targets. The findings indicate that this "cluster-then-route" decomposition strategy offers an efficient and reliable solution for large-scale multi-agent task allocation.
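The "cluster-then-route" decomposition is easy to sketch: plain Lloyd's k-means to group targets, then a cheap tour heuristic per cluster. This is a generic illustration (greedy nearest-neighbor routing; the paper's path-cost estimation refinements are omitted):

```python
import math
import random

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def kmeans(points, k, iters=30, seed=1):
    """Lloyd's algorithm on 2-D points; returns the non-empty clusters."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    clusters = []
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            clusters[min(range(k), key=lambda j: dist(p, centers[j]))].append(p)
        centers = [(sum(x for x, _ in c) / len(c), sum(y for _, y in c) / len(c))
                   if c else centers[j] for j, c in enumerate(clusters)]
    return [c for c in clusters if c]

def nn_route(depot, targets):
    """Greedy nearest-neighbor tour through one cluster, starting at the depot."""
    tour, rest, cur = [depot], list(targets), depot
    while rest:
        cur = min(rest, key=lambda p: dist(cur, p))
        rest.remove(cur)
        tour.append(cur)
    return tour

rng = random.Random(0)
depot = (0.0, 0.0)
targets = [(rng.uniform(-10, 10), rng.uniform(-10, 10)) for _ in range(60)]
tours = [nn_route(depot, c) for c in kmeans(targets, k=4)]
print(len(tours), sum(len(t) - 1 for t in tours))  # every target routed exactly once
```

Because clustering is near-linear and each per-cluster tour is small, the total cost grows far more slowly than solving one joint MTSP, which is the complexity argument the abstract makes.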

cs.RO eess.SY cs.SY
cs cs 11-26 00:00

Optimizations on Graph-Level for Domain Specific Computations in Julia and Application to QED

arXiv:2511.19456v1 Announce Type: new Abstract: Complex computational problems in science often consist of smaller parts that can have largely distinct compute requirements from one another. For optimal efficiency, analyzing each subtask and scheduling it on the best-suited hardware would be necessary. Other considerations must be taken into account, too, such as parallelism, dependencies between different subtasks, and data transfer speeds between devices. To achieve this, directed acyclic graphs are often employed to represent these problems and enable utilizing as much hardware as possible on a given machine. In this paper, we present a software framework written in Julia capable of automatically and dynamically producing statically scheduled and compiled code. We lay theoretical foundations and add domain-specific information about the computation to the existing concepts of DAG scheduling, enabling optimizations that would otherwise be impossible. To illustrate the theory we implement an example application: the computation of matrix elements for scattering processes with many external particles in quantum electrodynamics.

cs.DC cs.PF
cs cs 11-26 00:00

SparOA: Sparse and Operator-aware Hybrid Scheduling for Edge DNN Inference

arXiv:2511.19457v1 Announce Type: new Abstract: The resource demands of deep neural network (DNN) models introduce significant performance challenges, especially when deployed on resource-constrained edge devices. Existing solutions like model compression often sacrifice accuracy, while specialized hardware remains costly and inflexible. Hybrid inference methods, however, typically overlook how operator characteristics impact performance. In this work, we present SparOA, a CPU-GPU hybrid inference framework, which leverages both sparsity and computational intensity to optimize operator scheduling. SparOA addresses these challenges through three key components: (1) a threshold predictor that accurately determines optimal sparsity and computational intensity thresholds; (2) a reinforcement learning-based scheduler that dynamically optimizes resource allocation based on real-time hardware states; and (3) a hybrid inference engine that enhances efficiency through asynchronous execution and batch size optimization. Extensive results show that SparOA achieves an average speedup of 1.22-1.31x compared to all baselines, and outperforms the CPU-Only by up to 50.7x. Also, SparOA achieves optimal energy-per-inference, consuming 7%-16% less energy than the SOTA co-execution baseline.

cs.DC cs.AI
cs cs 11-26 00:00

Personalized Reward Modeling for Text-to-Image Generation

arXiv:2511.19458v1 Announce Type: new Abstract: Recent text-to-image (T2I) models generate semantically coherent images from textual prompts, yet evaluating how well they align with individual user preferences remains an open challenge. Conventional evaluation methods, such as general reward functions or similarity-based metrics, fail to capture the diversity and complexity of personal visual tastes. In this work, we present PIGReward, a personalized reward model that dynamically generates user-conditioned evaluation dimensions and assesses images through CoT reasoning. To address the scarcity of user data, PIGReward adopts a self-bootstrapping strategy that reasons over limited reference data to construct rich user contexts, enabling personalization without user-specific training. Beyond evaluation, PIGReward provides personalized feedback that drives user-specific prompt optimization, improving alignment between generated images and individual intent. We further introduce PIGBench, a per-user preference benchmark capturing diverse visual interpretations of shared prompts. Extensive experiments demonstrate that PIGReward surpasses existing methods in both accuracy and interpretability, establishing a scalable and reasoning-based foundation for personalized T2I evaluation and optimization. Taken together, our findings highlight PIGReward as a robust step toward individually aligned T2I generation.

cs.CV cs.AI
cs cs 11-26 00:00

Systemic approach for modeling a generic smart grid

arXiv:2511.19460v1 Announce Type: new Abstract: Smart grid technological advances present a recent class of complex interdisciplinary modeling and increasingly difficult simulation problems to solve using traditional computational methods. To simulate a smart grid requires a systemic approach to integrated modeling of power systems, energy markets, demand-side management, and many other resources and assets that are becoming part of the current power-grid paradigm. This paper presents a backbone model of a smart grid to test alternative scenarios for the grid. This tool simulates disparate systems to validate assumptions before building a human-scale model. Thanks to distributed optimization of subsystems, production and consumption scheduling is achieved while maintaining flexibility and scalability.

cs.DC cs.SY eess.SY cs.AI
cs cs 11-26 00:00

Urban Buildings Energy Consumption Estimation Using HPC: A Case Study of Bologna

arXiv:2511.19463v1 Announce Type: new Abstract: Urban Building Energy Modeling (UBEM) plays a central role in understanding and forecasting energy consumption at the city scale. In this work, we present a UBEM pipeline that integrates EnergyPlus simulations, high-performance computing (HPC), and open geospatial datasets to estimate the energy demand of buildings in Bologna, Italy. Geometric information including building footprints and heights was obtained from the Bologna Open Data portal and enhanced with aerial LiDAR measurements. Non-geometric attributes such as construction materials, insulation characteristics, and window performance were derived from regional building regulations and the European TABULA database. The computation was carried out on Leonardo, the Cineca-hosted supercomputer, enabling the simulation of approximately 25,000 buildings in under 30 minutes.

cs.DC physics.app-ph
cs cs 11-26 00:00

Temperature in SLMs: Impact on Incident Categorization in On-Premises Environments

arXiv:2511.19464v1 Announce Type: new Abstract: SOCs and CSIRTs face increasing pressure to automate incident categorization, yet the use of cloud-based LLMs introduces costs, latency, and confidentiality risks. We investigate whether locally executed SLMs can meet this challenge. We evaluated 21 models ranging from 1B to 20B parameters, varying the temperature hyperparameter and measuring execution time and precision across two distinct architectures. The results indicate that temperature has little influence on performance, whereas the number of parameters and GPU capacity are decisive factors.

cs.LG cs.DC cs.CR cs.PF cs.AI
cs cs 11-26 00:00

Hidden Markov model to predict tourists' visited places

arXiv:2511.19465v1 Announce Type: new Abstract: Nowadays, social networks are becoming a popular way of analyzing tourist behavior, thanks to the digital traces left by travelers during their stays on these networks. The massive amount of data generated by the propensity of tourists to share comments and photos during their trips makes it possible to model their journeys and analyze their behavior. Predicting the next movement of tourists plays a key role in tourism marketing to understand demand and improve decision support. In this paper, we propose a method to understand and to learn tourists' movements based on social network data analysis to predict future movements. The method relies on a machine learning grammatical inference algorithm. A major contribution in this paper is to adapt the grammatical inference algorithm to the context of big data. Our method produces a hidden Markov model representing the movements of a group of tourists. The hidden Markov model is flexible and editable with new data. Paris, the capital of France, is selected to demonstrate the efficiency of the proposed methodology.
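To make the prediction step concrete, here is a minimal discrete HMM with forward filtering followed by one-step prediction of the next hidden place. The two-place model, its parameters, and the "photo tag" observations are entirely invented for illustration:

```python
def predict_next_place(trans, emit, pi, observations):
    """Forward-algorithm filtering, then one-step prediction of the next
    hidden state (place) given an observation sequence."""
    states = list(pi)
    # forward pass (filtering): alpha[s] ~ P(obs so far, current place = s)
    alpha = {s: pi[s] * emit[s][observations[0]] for s in states}
    for o in observations[1:]:
        alpha = {s: emit[s][o] * sum(alpha[r] * trans[r][s] for r in states)
                 for s in states}
    z = sum(alpha.values())
    belief = {s: a / z for s, a in alpha.items()}
    # one-step prediction: propagate the belief through the transition matrix
    return {s: sum(belief[r] * trans[r][s] for r in states) for s in states}

# Hypothetical two-place model; all numbers are invented for the example.
trans = {"Louvre": {"Louvre": 0.3, "Eiffel": 0.7},
         "Eiffel": {"Louvre": 0.6, "Eiffel": 0.4}}
emit = {"Louvre": {"art": 0.8, "view": 0.2},
        "Eiffel": {"art": 0.1, "view": 0.9}}
pi = {"Louvre": 0.5, "Eiffel": 0.5}

pred = predict_next_place(trans, emit, pi, ["art", "art"])
print(max(pred, key=pred.get), round(pred["Eiffel"], 2))  # Eiffel 0.64
```

The grammatical-inference step in the paper is what learns `trans` and `emit` from the social-network traces; once the model exists, prediction is exactly this forward pass.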

cs.LG cs.AI
cs cs 11-26 00:00

SG-OIF: A Stability-Guided Online Influence Framework for Reliable Vision Data

arXiv:2511.19466v1 Announce Type: new Abstract: Approximating training-point influence on test predictions is critical for deploying deep-learning vision models, essential for locating noisy data. Though the influence function was proposed for attributing how infinitesimal up-weighting or removal of individual training examples affects model outputs, its implementation is still challenging in deep-learning vision models: inverse-curvature computations are expensive, and training non-stationarity invalidates static approximations. Prior works use iterative solvers and low-rank surrogates to reduce cost, but offline computation lags behind training dynamics, and missing confidence calibration yields fragile rankings that misidentify critical examples. To address these challenges, we introduce a Stability-Guided Online Influence Framework (SG-OIF), the first framework that treats algorithmic stability as a real-time controller, which (i) maintains lightweight anchor IHVPs via stochastic Richardson and preconditioned Neumann; (ii) proposes modular curvature backends to modulate per-example influence scores using stability-guided residual thresholds, anomaly gating, and confidence. Experimental results show that SG-OIF achieves SOTA (state-of-the-art) results on noise-label and out-of-distribution detection tasks across multiple datasets with various corruptions. Notably, our approach achieves 91.1% accuracy in the top 1% prediction samples on CIFAR-10 (20% asym), and a 99.8% AUPR score on MNIST, effectively demonstrating that this framework is a practical controller for online influence estimation.
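The expensive inverse-curvature step the framework keeps cheap can be illustrated with the classic truncated Neumann iteration for an inverse-Hessian-vector product (IHVP). This is a deterministic toy on an explicit 2x2 Hessian, not SG-OIF's stochastic Richardson / preconditioned scheme:

```python
import numpy as np

def neumann_ihvp(H, v, lr=0.1, steps=500):
    """Approximate H^{-1} v via the Neumann recurrence x <- v + (I - lr*H) x,
    for which lr * x converges to H^{-1} v when eig(lr*H) lies in (0, 2)."""
    x = v.copy()
    for _ in range(steps):
        x = v + x - lr * (H @ x)
    return lr * x

H = np.array([[2.0, 0.3],
              [0.3, 1.0]])   # a small SPD stand-in for the Hessian
v = np.array([1.0, -1.0])    # e.g. a test-loss gradient
ihvp = neumann_ihvp(H, v)
print(np.allclose(ihvp, np.linalg.solve(H, v)))  # True
# Influence of a training point z is then ~ -grad_z . ihvp (sign conventions vary).
```

In deep networks `H @ x` is replaced by a Hessian-vector product from double backprop, so no Hessian is ever materialized; SG-OIF's contribution is keeping such anchor IHVPs fresh and calibrated while training continues.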

cs.LG cs.CV cs.AI
cs cs 11-26 00:00

Towards a future space-based, highly scalable AI infrastructure system design

arXiv:2511.19468v1 Announce Type: new Abstract: If AI is a foundational general-purpose technology, we should anticipate that demand for AI compute -- and energy -- will continue to grow. The Sun is by far the largest energy source in our solar system, and thus it warrants consideration how future AI infrastructure could most efficiently tap into that power. This work explores a scalable compute system for machine learning in space, using fleets of satellites equipped with solar arrays, inter-satellite links using free-space optics, and Google tensor processing unit (TPU) accelerator chips. To facilitate high-bandwidth, low-latency inter-satellite communication, the satellites would be flown in close proximity. We illustrate the basic approach to formation flight via a 81-satellite cluster of 1 km radius, and describe an approach for using high-precision ML-based models to control large-scale constellations. Trillium TPUs are radiation tested. They survive a total ionizing dose equivalent to a 5 year mission life without permanent failures, and are characterized for bit-flip errors. Launch costs are a critical part of overall system cost; a learning curve analysis suggests launch to low-Earth orbit (LEO) may reach $\lesssim$\$200/kg by the mid-2030s.
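The launch-cost claim rests on a standard Wright's-law learning curve. A sketch with purely illustrative figures (roughly $1,500/kg today and a 20% cost reduction per doubling of cumulative launch mass; the paper's actual fit is not reproduced here):

```python
import math

def doublings_to_target(cost_now, cost_target, per_doubling_factor):
    """Wright's law: each doubling of cumulative production multiplies unit
    cost by a constant factor; returns whole doublings needed to hit target."""
    return math.ceil(math.log(cost_target / cost_now)
                     / math.log(per_doubling_factor))

# Illustrative: $1500/kg -> $200/kg at a 20% reduction per doubling.
print(doublings_to_target(1500.0, 200.0, 0.8))  # -> 10
```

Ten doublings is a ~1000x growth in cumulative launch mass, which frames why the abstract places the $200/kg threshold in the mid-2030s rather than sooner.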

cs.LG cs.DC
cs cs 11-26 00:00

Quantifying Modality Contributions via Disentangling Multimodal Representations

arXiv:2511.19470v1 Announce Type: new Abstract: Quantifying modality contributions in multimodal models remains a challenge, as existing approaches conflate the notion of contribution itself. Prior work relies on accuracy-based approaches, interpreting performance drops after removing a modality as indicative of its influence. However, such outcome-driven metrics fail to distinguish whether a modality is inherently informative or whether its value arises only through interaction with other modalities. This distinction is particularly important in cross-attention architectures, where modalities influence each other's representations. In this work, we propose a framework based on Partial Information Decomposition (PID) that quantifies modality contributions by decomposing predictive information in internal embeddings into unique, redundant, and synergistic components. To enable scalable, inference-only analysis, we develop an algorithm based on the Iterative Proportional Fitting Procedure (IPFP) that computes layer and dataset-level contributions without retraining. This provides a principled, representation-level view of multimodal behavior, offering clearer and more interpretable insights than outcome-based metrics.
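The IPFP workhorse the algorithm builds on is compact. A minimal two-variable version (matching a joint table to prescribed marginals; the PID decomposition itself involves more structure than this sketch):

```python
import numpy as np

def ipfp(table, row_targets, col_targets, iters=200):
    """Iterative Proportional Fitting: alternately rescale rows and columns of
    a positive table until its marginals match the targets (the KL projection
    onto the set of distributions with those marginals)."""
    q = table.astype(float)
    for _ in range(iters):
        q *= (row_targets / q.sum(axis=1))[:, None]
        q *= (col_targets / q.sum(axis=0))[None, :]
    return q

p = np.array([[0.125, 0.125],
              [0.250, 0.500]])
q = ipfp(p, np.array([0.3, 0.7]), np.array([0.6, 0.4]))
print(q.sum(axis=1), q.sum(axis=0))  # marginals now match the targets
```

Because each step is a closed-form rescaling of stored embedding statistics, the contributions can be computed at inference time, which is what makes the framework retraining-free.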

cs.LG cs.AI cs.CL
cs cs 11-26 00:00

PrefixGPT: Prefix Adder Optimization by a Generative Pre-trained Transformer

arXiv:2511.19472v1 Announce Type: new Abstract: Prefix adders are widely used in compute-intensive applications for their high speed. However, designing optimized prefix adders is challenging due to strict design rules and an exponentially large design space. We introduce PrefixGPT, a generative pre-trained Transformer (GPT) that directly generates optimized prefix adders from scratch. Our approach represents an adder's topology as a two-dimensional coordinate sequence and applies a legality mask during generation, ensuring every design is valid by construction. PrefixGPT features a customized decoder-only Transformer architecture. The model is first pre-trained on a corpus of randomly synthesized valid prefix adders to learn design rules and then fine-tuned to navigate the design space for optimized design quality. Compared with existing works, PrefixGPT not only finds a new optimal design with a 7.7% improved area-delay product (ADP) but exhibits superior exploration quality, lowering the average ADP by up to 79.1%. This demonstrates the potential of GPT-style models to first master complex hardware design principles and then apply them for more efficient design optimization.

cs.LG · cs.AR · cs.AI
cs cs 11-26 00:00

WavefrontDiffusion: Dynamic Decoding Schedule for Improved Reasoning

arXiv:2511.19473v1 Announce Type: new Abstract: Diffusion Language Models (DLMs) have shown strong potential for text generation and are becoming a competitive alternative to autoregressive models. The denoising strategy plays an important role in determining the quality of their outputs. Mainstream denoising strategies include Standard Diffusion and BlockDiffusion. Standard Diffusion performs global denoising without restricting the update range, often finalizing incomplete context and causing premature end-of-sequence predictions. BlockDiffusion updates fixed-size blocks in a preset order, but its rigid structure can break apart coherent semantic units and disrupt reasoning. We present WavefrontDiffusion, a dynamic decoding approach that expands a wavefront of active tokens outward from finalized positions. This adaptive process follows the natural flow of semantic structure while keeping computational cost equal to block-based methods. Across four benchmarks in reasoning and code generation, WavefrontDiffusion achieves state-of-the-art performance while producing outputs with higher semantic fidelity, showing the value of adaptive scheduling for more coherent and efficient generation.
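The wavefront schedule can be illustrated with a toy 1-D version: only neighbors of already-finalized positions are eligible, so decoding grows outward instead of jumping globally or in fixed blocks. Confidences below are made up for illustration:

```python
# Toy wavefront scheduling: finalize one position at a time, restricting
# candidates to neighbors of finalized positions (the "wavefront").

def wavefront_order(confidence, seed):
    """Return the order in which positions get finalized."""
    n = len(confidence)
    finalized, order = {seed}, [seed]
    while len(finalized) < n:
        # The wavefront: not-yet-finalized neighbors of finalized tokens.
        front = {j for i in finalized for j in (i - 1, i + 1)
                 if 0 <= j < n and j not in finalized}
        best = max(front, key=lambda j: confidence[j])
        finalized.add(best)
        order.append(best)
    return order

order = wavefront_order([0.2, 0.9, 0.5, 0.8, 0.1], seed=1)
```

Starting from position 1, the schedule expands rightward while confidence is high there, then sweeps back left, tracking the local structure rather than a preset block order.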

cs.LG · cs.AI
cs cs 11-26 00:00

Pistachio: Towards Synthetic, Balanced, and Long-Form Video Anomaly Benchmarks

arXiv:2511.19474v1 Announce Type: new Abstract: Automatically detecting abnormal events in videos is crucial for modern autonomous systems, yet existing Video Anomaly Detection (VAD) benchmarks lack the scene diversity, balanced anomaly coverage, and temporal complexity needed to reliably assess real-world performance. Meanwhile, the community is increasingly moving toward Video Anomaly Understanding (VAU), which requires deeper semantic and causal reasoning but remains difficult to benchmark due to the heavy manual annotation effort it demands. In this paper, we introduce Pistachio, a new VAD/VAU benchmark constructed entirely through a controlled, generation-based pipeline. By leveraging recent advances in video generation models, Pistachio provides precise control over scenes, anomaly types, and temporal narratives, effectively eliminating the biases and limitations of Internet-collected datasets. Our pipeline integrates scene-conditioned anomaly assignment, multi-step storyline generation, and a temporally consistent long-form synthesis strategy that produces coherent 41-second videos with minimal human intervention. Extensive experiments demonstrate the scale, diversity, and complexity of Pistachio, revealing new challenges for existing methods and motivating future research on dynamic and multi-event anomaly understanding.

cs.MM · cs.CV · cs.AI
cs cs 11-26 00:00

Tracking and Segmenting Anything in Any Modality

arXiv:2511.19475v1 Announce Type: new Abstract: Tracking and segmentation play essential roles in video understanding, providing basic positional information and temporal association of objects within video sequences. Despite their shared objective, existing approaches often tackle these tasks using specialized architectures or modality-specific parameters, limiting their generalization and scalability. Recent efforts have attempted to unify multiple tracking and segmentation subtasks from the perspectives of any modality input or multi-task inference. However, these approaches tend to overlook two critical challenges: the distributional gap across different modalities and the feature representation gap across tasks. These issues hinder effective cross-task and cross-modal knowledge sharing, ultimately constraining the development of a true generalist model. To address these limitations, we propose a universal tracking and segmentation framework named SATA, which unifies a broad spectrum of tracking and segmentation subtasks with any modality input. Specifically, a Decoupled Mixture-of-Expert (DeMoE) mechanism is presented to decouple the unified representation learning task into the modeling process of cross-modal shared knowledge and specific information, thus enabling the model to maintain flexibility while enhancing generalization. Additionally, we introduce a Task-aware Multi-object Tracking (TaMOT) pipeline to unify all the task outputs as a unified set of instances with calibrated ID information, thereby alleviating the degradation of task-specific knowledge during multi-task training. SATA demonstrates superior performance on 18 challenging tracking and segmentation benchmarks, offering a novel perspective for more generalizable video understanding.

cs.MM · cs.CV · cs.AI
econ econ 11-26 00:00

Big Wins, Small Net Gains: Direct and Spillover Effects of First Industry Entries in Puerto Rico

arXiv:2511.19469v1 Announce Type: new Abstract: I study how first sizable industry entries reshape local and neighboring labor markets in Puerto Rico. Using over a decade of quarterly municipality-industry data (2014Q1-2025Q1), I identify "first sizable entries" as large, persistent jumps in establishments, covered employment, and wage bill, and treat these as shocks to local industry presence at the municipio-industry level. Methodologically, I combine staggered-adoption difference-in-differences estimators that are robust to heterogeneous treatment timing with an imputation-based event-study approach, and I use a doubly robust difference-in-differences framework that explicitly allows for interference through pre-specified exposure mappings on a contiguity graph. The estimates show large and persistent direct gains in covered employment and wage bill in the treated municipality-industry cells over 0-16 quarters. Same-industry neighbors experience sizable short-run gains that reverse over the medium run, while within-municipality cross-industry and neighbor all-industries spillovers are small and imprecisely estimated. Once these spillovers are taken into account and spatially robust inference and sensitivity checks are applied, the net regional 0-16 quarter effect on covered employment is positive but modest in magnitude and estimated with considerable uncertainty. The results imply that first sizable entries generate substantial local gains where they occur, but much smaller and less precisely measured net employment gains for the broader regional economy, highlighting the importance of accounting for spatial spillovers when evaluating place-based policies.
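The staggered-adoption estimators in this paper generalize the canonical 2x2 difference-in-differences contrast. As a reminder of that building block, a toy sketch with invented employment numbers (not the paper's data or estimator):

```python
# Canonical 2x2 difference-in-differences: the treated group's change over
# time minus the control group's change. Staggered designs with many
# treatment dates aggregate robust versions of this comparison.

def did_2x2(treat_pre, treat_post, ctrl_pre, ctrl_post):
    """DiD effect: treated change minus control change."""
    return (treat_post - treat_pre) - (ctrl_post - ctrl_pre)

# Toy covered-employment figures: the treated cell jumps after an industry
# entry while the control cell only drifts.
effect = did_2x2(treat_pre=100.0, treat_post=130.0,
                 ctrl_pre=95.0, ctrl_post=100.0)
```

Here the treated cell gains 30 jobs against a 5-job control trend, so the DiD estimate attributes 25 jobs to the entry, under the parallel-trends assumption the robust estimators are designed to protect.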

q-fin.EC · stat.ME · econ.GN · econ.EM
econ econ 11-26 00:00

Cash Transfers in the Perinatal Period and Child Welfare System Involvement Among Infants: Evidence from the Rx Kids Program in Flint, Michigan

arXiv:2511.19570v1 Announce Type: new Abstract: Infants are most vulnerable to child maltreatment, which may be due in part to economic instability during the perinatal period. In 2024, Rx Kids was launched in Flint, Michigan, achieving near 100% aggregate take up and providing every expectant mother with unconditional cash transfers during pregnancy and infancy. Synthetic difference-in-differences was used to compare changes in allegations of maltreatment within the first six months of life in Flint before and after implementation of Rx Kids relative to the corresponding change in control cities without the program. In the three years prior to the implementation of Rx Kids, the proportion of infants with a maltreatment allegation within the first six months of life was 21.7% in Flint and 19.5% among control cities. After implementation of Rx Kids in 2024, the maltreatment allegation rate dropped to 15.5% in Flint, falling below the maltreatment allegation rate of 20.6% among the control cities. Rx Kids was associated with a statistically significant 7.0 percentage-point decrease in the maltreatment allegation rate (p = 0.021), corresponding to a 32% decrease relative to the pre-intervention period. There was a decrease in the rate of neglect-related, non-neglect-related, and substantiated allegations; these were directionally consistent with the primary outcome but not statistically significant. Results were robust to alternative model specifications. The Rx Kids prenatal and infant cash prescription program led to a significant reduction in allegations of maltreatment among infants. These findings provide important evidence about the role of economic stability in preventing child welfare system involvement.

q-fin.EC · econ.GN
econ econ 11-26 00:00

Total Factor Productivity and its determinants: an analysis of the relationship at firm level through unsupervised learning techniques

arXiv:2511.19627v1 Announce Type: new Abstract: This paper identifies firm-level features that serve as determinants of total factor productivity through unsupervised learning techniques (principal component analysis, self-organizing maps, clustering). This bottom-up approach can effectively manage the problem of the heterogeneity of firms and provides new ways to look at firms' standard classifications. Using the large sample provided by the ORBIS database, the analysis covers the years before the outbreak of Covid-19 (2015-2019) and the immediate post-Covid period (year 2020). We show that in both periods, the main determinants of productivity growth are related to profitability, credit/debt measures, cost and capital efficiency, and the effort and outcome of the R&D activity conducted by the firms. Finally, a linear relationship between determinants and productivity growth has been found.

q-fin.EC · econ.GN
econ econ 11-26 00:00

Spiral of Silence: How Neutral Moderation Polarizes Content Creation

arXiv:2511.19680v1 Announce Type: new Abstract: This paper investigates how content moderation affects content creation in ideologically diverse online environments. We develop a model in which users act as both creators and consumers, differing in their ideological affiliation and propensity to produce toxic content. Affective polarization, i.e., users' aversion to ideologically opposed content, interacts with moderation in unintended ways. We show that even ideologically neutral moderation that targets only toxicity can suppress non-toxic content creation, particularly from ideological minorities. Our analysis reveals a content-level externality: when toxic content is removed, non-toxic posts gain exposure. While creators from the ideological majority group sometimes benefit from this exposure, they do not internalize the negative spillovers, i.e., increased out-group animosity toward minority creators. This can discourage minority creation and polarize the content supply, ultimately leaving minority users in a more ideologically imbalanced environment: a mechanism reminiscent of the "spiral of silence." Thus, our model offers an alternative perspective to a common debate: what appears as bias in moderation need not reflect bias in rules, but can instead emerge endogenously as self-censorship in equilibrium. We also extend the model to explore how content personalization interacts with moderation policies.

q-fin.EC · econ.GN
econ econ 11-26 00:00

Individual and group fairness in geographical partitioning

arXiv:2511.19722v1 Announce Type: new Abstract: Socioeconomic segregation often arises in school districting and other contexts, causing some groups to be over- or under-represented within a particular district. This phenomenon is closely linked with disparities in opportunities and outcomes. We formulate a new class of geographical partitioning problems in which the population is heterogeneous, and it is necessary to ensure fair representation for each group at each facility. We prove that the optimal solution is a novel generalization of the additively weighted Voronoi diagram, and we propose a simple and efficient algorithm to compute it, thus resolving an open question dating back to Dvoretzky et al. (1951). The efficacy and potential for practical insight of the approach are demonstrated in a realistic case study involving seven demographic groups and $78$ district offices.
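The paper's optimal partition is an additively weighted Voronoi diagram: each point is assigned to the facility minimizing distance minus a per-facility weight, and adjusting the weights moves the cell boundaries. A stdlib sketch with illustrative sites and weights (not the paper's algorithm for computing the optimal weights):

```python
import math

# Additively weighted Voronoi assignment: point -> facility minimizing
# (Euclidean distance - facility weight). Toy 2-site example.

def assign(point, sites, weights):
    def score(i):
        return math.dist(point, sites[i]) - weights[i]
    return min(range(len(sites)), key=score)

sites = [(0.0, 0.0), (4.0, 0.0)]
# Unweighted: the cell boundary is the midline at x = 2.
plain = assign((1.9, 0.0), sites, [0.0, 0.0])
# A positive weight on site 1 pulls the boundary toward site 0, so the
# same point now belongs to site 1's cell.
shifted = assign((1.9, 0.0), sites, [0.0, 1.0])
```

Tuning these additive weights is the lever that lets each facility receive a fair share of every demographic group while keeping cells distance-based.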

cs.LG · econ.EM
econ econ 11-26 00:00

Institutional Learning and Volatility Transmission in ASEAN Equity Markets: A Network-Integrated Regime-Dependent Approach

arXiv:2511.19824v1 Announce Type: new Abstract: This paper investigates how institutional learning and regional spillovers shape volatility dynamics in ASEAN equity markets. Using daily data for Indonesia, Malaysia, the Philippines, and Thailand from 2010 to 2024, we construct a high-frequency institutional learning index via a MIDAS-EPU approach. Unlike existing studies that treat institutional quality as a static background characteristic, this paper models institutions as a dynamic mechanism that reacts to policy shocks, information pressure, and crisis events. Building on this perspective, we introduce two new volatility frameworks: the Institutional Response Dynamics Model (IRDM), which embeds crisis memory, policy shocks, and information flows; and the Network-Integrated IRDM (N-IRDM), which incorporates dynamic-correlation and institutional-similarity networks to capture cross-market transmission. Empirical results show that institutional learning amplifies short-run sensitivity to shocks yet accelerates post-crisis normalization. Crisis-memory terms explain prolonged volatility clustering, while network interactions improve tail behavior and short-horizon forecasts. Robustness checks using placebo and lagged networks indicate that spillovers reflect a strong regional common factor rather than dependence on specific correlation topologies. Diebold-Mariano and ENCNEW tests confirm that the N-IRDM significantly outperforms baseline GARCH benchmarks. The findings highlight a dual role of institutions and offer policy insights on transparency enhancement, macroprudential communication, and coordinated regional governance.

stat.AP · econ.EM
econ econ 11-26 00:00

Expectation-enforcing strategies for repeated games

arXiv:2511.19828v1 Announce Type: new Abstract: Originating in evolutionary game theory, the class of "zero-determinant" strategies enables a player to unilaterally enforce linear payoff relationships in simple repeated games. An upshot of this kind of payoff constraint is that it can shape the incentives for the opponent in a predetermined way. An example is when a player ensures that the agents get equal payoffs. While extensively studied in infinite-horizon games, extensions to discounted games, nonlinear payoff relationships, richer strategic environments, and behaviors with long memory remain incompletely understood. In this paper, we provide necessary and sufficient conditions for a player to enforce arbitrary payoff relationships (linear or nonlinear), in expectation, in discounted games. These conditions characterize precisely which payoff relationships are enforceable using strategies of arbitrary complexity. Our main result establishes that any such enforceable relationship can actually be implemented using a simple two-point reactive learning strategy, which conditions on the opponent's most recent action and the player's own previous mixed action, using information from only one round into the past. For additive payoff constraints, we show that enforcement is possible using even simpler (reactive) strategies that depend solely on the opponent's last move. In other words, this tractable class is universal within expectation-enforcing strategies. As examples, we apply these results to characterize extortionate, generous, equalizer, and fair strategies in the iterated prisoner's dilemma, asymmetric donation game, nonlinear donation game, and the hawk-dove game, identifying precisely when each class of strategy is enforceable and with what minimum discount factor.

econ.TH · cs.GT · q-bio.PE
econ econ 11-26 00:00

Solving Heterogeneous Agent Models with Physics-informed Neural Networks

arXiv:2511.20283v1 Announce Type: new Abstract: Understanding household behaviour is essential for modelling macroeconomic dynamics and designing effective policy. While heterogeneous agent models offer a more realistic alternative to representative agent frameworks, their implementation poses significant computational challenges, particularly in continuous time. The Aiyagari-Bewley-Huggett (ABH) framework, recast as a system of partial differential equations, typically relies on grid-based solvers that suffer from the curse of dimensionality, high computational cost, and numerical inaccuracies. This paper introduces the ABH-PINN solver, an approach based on Physics-Informed Neural Networks (PINNs), which embeds the Hamilton-Jacobi-Bellman and Kolmogorov Forward equations directly into the neural network training objective. By replacing grid-based approximation with mesh-free, differentiable function learning, the ABH-PINN solver inherits the advantages of PINNs: improved scalability, smoother solutions, and computational efficiency. Preliminary results show that the PINN-based approach is able to obtain economically valid results matching the established finite-difference solvers.

cs.LG · q-fin.EC · econ.GN
econ econ 11-26 00:00

Evolutionarily stable strategy in asymmetric games: Dynamical and information-theoretical perspectives

arXiv:2409.19320v4 Announce Type: cross Abstract: Evolutionarily stable strategy (ESS) is the defining concept of evolutionary game theory. It has a fairly unanimously accepted definition for the case of symmetric games, which are played in a homogeneous population where all individuals are in the same role. However, in asymmetric games, which are played in a population with multiple subpopulations (each of which has individuals in one particular role), the situation is not as clear. Various generalizations of ESS defined for such cases differ in how they correspond to fixed points of the replicator equation, which models the evolutionary dynamics of the frequencies of strategies in the population. Moreover, some of the definitions may even be equivalent, and hence, redundant in the scheme of things. Along with reporting some new results, this paper is partly intended as a contextual mini-review of some of the most important definitions of ESS in asymmetric games. We present the definitions coherently and scrutinize them closely while establishing equivalences -- some of them hitherto unreported -- between them wherever possible. Since it is desirable that a definition of ESS should correspond to asymptotically stable fixed points of replicator dynamics, we bring forward the connections between various definitions and their dynamical stabilities. Furthermore, we use the principle of relative entropy to gain information-theoretic insights into the concept of ESS in asymmetric games, thereby establishing a three-fold connection between game theory, dynamical system theory, and information theory in this context. We discuss our conclusions also against the backdrop of asymmetric hypermatrix games, where more than two individuals interact simultaneously in the course of getting payoffs.

econ.TH · q-bio.PE · nlin.AO
econ econ 11-26 00:00

When Should Neural Data Inform Welfare? A Critical Framework for Policy Uses of Neuroeconomics

arXiv:2511.19548v1 Announce Type: cross Abstract: Neuroeconomics promises to ground welfare analysis in neural and computational evidence about how people value outcomes, learn from experience and exercise self-control. At the same time, policy and commercial actors increasingly invoke neural data to justify paternalistic regulation, "brain-based" interventions and new welfare measures. This paper asks under what conditions neural data can legitimately inform welfare judgements for policy rather than merely describing behaviour. I develop a non-empirical, model-based framework that links three levels: neural signals, computational decision models and normative welfare criteria. Within an actor-critic reinforcement-learning model, I formalise the inference path from neural activity to latent values and prediction errors and then to welfare claims. I show that neural evidence constrains welfare judgements only when the neural-computational mapping is well validated, the decision model identifies "true" interests versus context-dependent mistakes, and the welfare criterion is explicitly specified and defended. Applying the framework to addiction, neuromarketing and environmental policy, I derive a Neuroeconomic Welfare Inference Checklist for regulators and for designers of NeuroAI systems. The analysis treats brains and artificial agents as value-learning systems while showing that internal reward signals, whether biological or artificial, are computational quantities and cannot be treated as welfare measures without an explicit normative model.

cs.LG · q-bio.NC · q-fin.EC · cs.CY · econ.GN · cs.AI
econ econ 11-26 00:00

Threshold Tensor Factor Model in CP Form

arXiv:2511.19796v1 Announce Type: cross Abstract: This paper proposes a new Threshold Tensor Factor Model in Canonical Polyadic (CP) form for tensor time series. By integrating a thresholding autoregressive structure for the latent factor process into the tensor factor model in CP form, the model captures regime-switching dynamics in the latent factor processes while retaining the parsimony and interpretability of low-rank tensor representations. We develop estimation procedures for the model and establish the theoretical properties of the resulting estimators. Numerical experiments and a real-data application illustrate the practical performance and usefulness of the proposed framework.

stat.AP · stat.ME · econ.EM
econ econ 11-26 00:00

Realistic gossip in Trust Game on networks: the GODS model

arXiv:2511.20248v1 Announce Type: cross Abstract: Gossip has been shown to be a relatively efficient solution to problems of cooperation in reputation-based systems of exchange, but many studies don't conceptualize gossiping in a realistic way, often assuming near-perfect information or broadcast-like dynamics of its spread. To solve this problem, we developed an agent-based model that pairs realistic gossip processes with different variants of Trust Game. The results show that cooperators suffer when local interactions govern spread of gossip, because they cannot discriminate against defectors. Realistic gossiping increases the overall amount of resources, but is more likely to promote defection. Moreover, even partner selection through dynamic networks can lead to high payoff inequalities among agent types. Cooperators face a choice between outcompeting defectors and overall growth. By blending direct and indirect reciprocity with reputations we show that gossiping increases the efficiency of cooperation by an order of magnitude.

econ.TH · cs.MA · cs.CY · cs.SI · physics.soc-ph
econ econ 11-26 00:00

Keeping in Place After the Storm: Emergency Assistance and Evictions

arXiv:2505.14548v3 Announce Type: replace Abstract: We offer evidence that federal emergency assistance (FEMA) in the days following natural disasters mitigates evictions in comparison to similar emergency scenarios where FEMA aid is not provided. We find an approximate 10.9% increase in overall evictions after hurricane natural disaster events, driven in large part by areas in close proximity to the hurricane path that do not receive FEMA rental assistance. Furthermore, we also show that FEMA aid acts as a liquidity buffer to other forms of emergency credit; specifically, we find that both transaction volumes and defaults decrease during hurricane events in locations that do receive FEMA aid. This effect largely reverses in areas that do not receive FEMA aid, where transaction volumes drop by less and default rates remain similar relative to the baseline. Overall, this suggests that the availability of emergency liquidity during natural disaster events is indeed a binding constraint with real household financial consequences, in particular through our documented channel of evictions and in usage of high-cost credit.

q-fin.EC · econ.GN
econ econ 11-26 00:00

Simulating Macroeconomic Expectations using LLM Agents

arXiv:2505.17648v4 Announce Type: replace Abstract: We introduce a novel framework for simulating macroeconomic expectations using LLM Agents. By constructing LLM Agents equipped with various functional modules, we replicate three representative survey experiments involving several expectations across different types of economic agents. Our results show that although the expectations simulated by LLM Agents are more homogeneous than those of humans, they consistently outperform LLMs relying simply on prompt engineering, and possess human-like mental mechanisms. Evaluation reveals that these capabilities stem from the contributions of their components, offering guidelines for their architectural design. Our approach complements traditional methods and provides new insights into AI behavioral science in macroeconomic research.

q-fin.EC · econ.GN · cs.AI
econ econ 11-26 00:00

Multiple Randomization Designs: Estimation and Inference with Interference

arXiv:2112.13495v2 Announce Type: replace-cross Abstract: In this study we introduce a new class of experimental designs. In a classical randomized controlled trial (RCT), or A/B test, a randomly selected subset of a population of units (e.g., individuals, plots of land, or experiences) is assigned to a treatment (treatment A), and the remainder of the population is assigned to the control treatment (treatment B). The difference in average outcome by treatment group is an estimate of the average effect of the treatment. However, motivating our study, the setting for modern experiments is often different, with the outcomes and treatment assignments indexed by multiple populations. For example, outcomes may be indexed by buyers and sellers, by content creators and subscribers, by drivers and riders, or by travelers and airlines and travel agents, with treatments potentially varying across these indices. Spillovers or interference can arise from interactions between units across populations. For example, sellers' behavior may depend on buyers' treatment assignment, or vice versa. This can invalidate the simple comparison of means as an estimator for the average effect of the treatment in classical RCTs. We propose new experiment designs for settings in which multiple populations interact. We show how these designs allow us to study questions about interference that cannot be answered by classical randomized experiments. Finally, we develop new statistical methods for analyzing these Multiple Randomization Designs.
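The crossed structure of these designs is easy to see in miniature: randomize buyers and sellers independently, and every buyer-seller pair lands in one of four cells (both treated, buyer only, seller only, neither). A stdlib sketch with invented names and a fixed seed (an illustration of the idea, not the paper's estimators):

```python
import random

# Simple "multiple randomization" (crossed) design: two populations are
# randomized independently, producing four treatment cells per pair.

random.seed(0)  # fixed seed so the toy assignment is reproducible
buyers = [f"b{i}" for i in range(4)]
sellers = [f"s{j}" for j in range(4)]
b_treated = {b: random.random() < 0.5 for b in buyers}
s_treated = {s: random.random() < 0.5 for s in sellers}

def cell(b, s):
    """Treatment cell of a buyer-seller pair, e.g. (True, False)."""
    return (b_treated[b], s_treated[s])

cells = {cell(b, s) for b in buyers for s in sellers}
```

Comparing outcomes across the four cells, rather than just treated vs. control, is what lets these designs separate direct effects from the cross-population spillovers that invalidate the simple comparison of means.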

stat.TH · math.ST · stat.ME · cs.SI · econ.EM
astro-ph astro-ph 11-26 00:00

Measuring Cometary Nuclei Behind Bright Comae: PSF Delta Decomposition with Bicubic Resampling and an Application to Interstellar Comet 3I/ATLAS (C/2025 N1)

arXiv:2511.19467v1 Announce Type: new Abstract: Measuring cometary nuclei is notoriously difficult because they are usually unresolved and embedded within bright comae, which hampers direct size measurements even with space telescopes. We present a practical, instrumental method that stabilises the inner core through bicubic resampling, performs forward point-spread-function (PSF) convolution, and separates the unresolved nucleus from the inner-coma profile via an explicit Dirac delta function added to a rho^-1 surface-brightness law. The method yields the nucleus flux by fitting an azimuthally averaged profile with only two amplitudes (PSF core and convolved coma), with transparent residual diagnostics. As a case study, we apply the workflow to the interstellar comet 3I/ATLAS (C/2025 N1), incorporating Hubble Space Telescope (HST) constraints on the nucleus size. We find that radius solutions consistent with 0.16 <= Rn <= 2.8 km for Pv = 0.04 are naturally recovered, in line with the most recent HST upper limits. The approach is well suited for survey pipelines (e.g. Rubin LSST) and targeted follow-up.
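Because the profile model is linear in the two amplitudes, the fit reduces to 2x2 normal equations. A stdlib sketch with synthetic stand-in profiles (not real comet data or the paper's pipeline):

```python
# Two-amplitude decomposition: model the azimuthally averaged profile as
# data ~ A * psf(r) + B * coma(r) and solve the 2x2 normal equations.

def fit_two_amplitudes(data, psf, coma):
    """Least-squares amplitudes (A, B) for data ~ A*psf + B*coma."""
    spp = sum(p * p for p in psf)
    spc = sum(p * c for p, c in zip(psf, coma))
    scc = sum(c * c for c in coma)
    spd = sum(p * d for p, d in zip(psf, data))
    scd = sum(c * d for c, d in zip(coma, data))
    det = spp * scc - spc * spc
    return ((spd * scc - scd * spc) / det,
            (scd * spp - spd * spc) / det)

psf = [1.0, 0.5, 0.1, 0.0]    # unresolved nucleus: delta convolved with PSF
coma = [1.0, 0.9, 0.7, 0.5]   # convolved rho^-1 coma profile (toy values)
data = [2.0 * p + 3.0 * c for p, c in zip(psf, coma)]  # true A=2, B=3
A, B = fit_two_amplitudes(data, psf, coma)
```

The PSF amplitude A is the nucleus flux; the residuals of this two-component fit are the diagnostics the abstract refers to.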

astro-ph.SR · astro-ph.EP · astro-ph.IM
astro-ph astro-ph 11-26 00:00

Sequential Convex Programming for Multimode Spacecraft Trajectory Optimization

arXiv:2511.19505v1 Announce Type: new Abstract: Spacecraft equipped with multiple propulsion modes or systems can offer enhanced performance and mission flexibility compared with traditional configurations. Despite these benefits, the trajectory optimization of spacecraft utilizing such configurations remains a complex challenge. This paper presents a sequential convex programming (SCP) approach for the optimal design of multi-mode and multi-propulsion spacecraft trajectories. The method extends the dynamical linearization within SCP using sparse automatic differentiation, enabling efficient inclusion of multiple propulsion modes or systems without complex manual reformulation while maintaining comparable computational efficiency. New constraint formulations are introduced to ensure selection of a single propulsion mode at each time step and limit the total number of modes used. The approach is demonstrated for (i) a low-thrust Earth-67P rendezvous using the SPT-140 thruster with 20 discrete modes, and (ii) an Earth-Mars transfer employing both a low-thrust engine and a solar sail. Results confirm that the proposed method can efficiently compute optimal trajectories for these scenarios.

astro-ph.EP · astro-ph.IM · cs.SY · eess.SY
astro-ph astro-ph 11-26 00:00

Constraining properties of dust formed in Wolf-Rayet binary WR 112 using mid-infrared and millimeter observations

arXiv:2511.19572v1 Announce Type: new Abstract: Binaries that host a carbon-rich Wolf-Rayet (WC) star and an OB-type companion can be copious dust producers. Yet the properties of dust, particularly the grain size distribution, in these systems remain uncertain. We present Band 6 observations of WR 112 by the Atacama Large Millimeter/submillimeter Array telescope (ALMA), which are the first millimeter observations of a WC binary system capable of resolving its dust emission. By combining ALMA observations with James Webb Space Telescope (JWST) images, we were able to analyze the spatially resolved spectral energy distribution (SED) of WR 112. We found that the SEDs are consistent with emissions from hydrogen-poor amorphous carbon grains. Notably, our results also suggest that the majority of grains in the system have radii below one micrometer, and the extended dust structures are dominated by nanometer-sized grains. Among four parameterizations of the grain radius distribution that we tested, a bimodal distribution, with abundant nanometer-sized grains and a secondary population of 0.1-micron grains, best reproduces the observed SED. This bimodal distribution helps to reconcile the previously conflicting grain size estimates reported for WR 112 and for other WC systems. We hypothesize that dust destruction mechanisms such as radiative torque disruption and radiative-driven sublimation are responsible for driving the system to the bimodal grain size distribution.

astro-ph.SR · astro-ph.EP · astro-ph.GA
astro-ph astro-ph 11-26 00:00

Chemical and Isotopic Homogeneity Between the L Dwarf CD-35 2722 B and its Early M Host Star

arXiv:2511.19588v1 Announce Type: new Abstract: CD-35 2722 B is an L dwarf companion to the nearby, $\sim 50-200$ Myr old M1 dwarf CD-35 2722 A. We present a detailed analysis of both objects using high-resolution ($R \sim 35,000$) $K$ band spectroscopy from the Keck Planet Imager and Characterizer (KPIC) combined with archival photometry. With a mass of $30^{+5}_{-4} M_{\mathrm{Jup}}$ (planet-to-host mass ratio 0.05) and projected separation of $67\pm4$ AU from its host, CD-35 2722 B likely formed via gravitational instability. We explore whether the chemical composition of the system tells a similar story. Accounting for systematic uncertainties, we find $\mathrm{[M/H]}=-0.16^{+0.03}_{-0.02} \mathrm{(stat)} \pm 0.25 \mathrm{(sys)}$ dex and $^{12}\mathrm{C}/^{13}\mathrm{C}=132^{+20}_{-14}$ for the host, and $\mathrm{[M/H]}=0.27^{+0.07}_{-0.06} (\mathrm{stat}) \pm 0.12 (\mathrm{sys})$ dex, $^{12}\mathrm{CO}/^{13}\mathrm{CO}=159^{+33}_{-24} \mathrm{(stat)}^{+40}_{-33} \mathrm{(sys)}$, and $\mathrm{C/O} = 0.55 \pm 0.01 (\mathrm{stat}) \pm 0.04 (\mathrm{sys})$ for the companion. The chemical compositions for the brown dwarf and host star agree within the $1.5\sigma$ level, supporting a scenario where CD-35 2722 B formed via gravitational instability. We do not find evidence for clouds on CD-35 2722 B despite it being a photometrically red mid-L dwarf and thus expected to be quite cloudy. We retrieve a temperature structure which is more isothermal than models and investigate its impact on our measurements, finding that constraining the temperature structure to self-consistent models does not significantly impact our retrieved chemical properties. Our observations highlight the need for data from complementary wavelength ranges to verify the presence of aerosols in likely cloudy L dwarfs.

astro-ph.SR astro-ph.EP
astro-ph astro-ph 11-26 00:00

Mind the Information Gap: Unveiling Detailed Morphologies of z ~ 0.5-1.0 Galaxies with SLACS Strong Lenses and Data-Driven Analysis

arXiv:2511.19595v1 Announce Type: new Abstract: We present new state-of-the-art lens models for strong gravitational lensing systems from the Sloan Lens ACS (SLACS) survey, developed within a Bayesian framework that employs high-dimensional (pixellated), data-driven priors for the background source, foreground lens light, and point-spread function (PSF). Unlike conventional methods, our approach delivers high-resolution reconstructions of all major physical components of the lensing system and substantially reduces model-data residuals compared to previous work. For the majority of 30 lensing systems analyzed, we also provide posterior samples capturing the full uncertainty of each physical model parameter. The reconstructions of the background sources reveal high-significance morphological structures as small as 200 parsecs in galaxies at redshifts of z ~ 0.5-1.0, demonstrating the power of strong lensing and the analysis method to be used as a cosmic telescope to study the high redshift universe. This study marks the first application of data-driven generative priors to modeling real strong-lensing data and establishes a new benchmark for strong lensing precision modeling in the era of large-scale imaging surveys.

astro-ph.CO astro-ph.GA astro-ph.IM
astro-ph astro-ph 11-26 00:00

Towards Reconciling Reionization with JWST: The Role of Bright Galaxies and Strong Feedback

arXiv:2511.19600v1 Announce Type: new Abstract: The elevated UV luminosity functions (UVLF) from recent James Webb Space Telescope (JWST) observations have challenged the viability of existing theoretical models. To address this, we use a semi-analytical framework -- which couples a physically motivated source model derived from radiative-transfer hydrodynamic simulations of reionization with a Markov Chain Monte Carlo sampler -- to perform a joint calibration to JWST galaxy surveys (UVLF, $\phi_{\rm UV}$ and UV luminosity density, $\rho_{\rm UV}$) and reionization-era observables (ionizing emissivity, $\dot{N}_{\rm ion}$, neutral hydrogen fraction, $x_{\rm HI}$, and Thomson optical depth, $\tau$). We find that models with weak feedback and a higher contribution from faint galaxies reproduce the reionization observables but struggle to match the elevated JWST UVLF at $z > 9$. In contrast, models with stronger feedback (i.e., rapid redshift evolution) and a higher contribution from bright galaxies successfully reproduce JWST UVLF at $z \geq 10$, but over-estimate the bright end at $z < 9$. The strong-feedback model constrained by JWST UVLF predicts a more gradual and extended reionization history, as opposed to the sudden reionization seen in the weak-feedback models. This extended nature of reionization ($z\sim 16$ - $6$) yields an optical depth consistent (at 2-$\sigma$) with the Cosmic Microwave Background (CMB) constraint, thereby alleviating the photon-budget crisis. In both scenarios, reionization is complete by $z \sim 6$, consistent with current data. Our analysis highlights the importance of accurately modeling feedback and ionizing emissivities from different source populations during the first billion years after the Big Bang.

astro-ph.CO astro-ph.GA
astro-ph astro-ph 11-26 00:00

Tracing AGN-Galaxy Co-Evolution with UV Line-Selected Obscured AGN

arXiv:2511.19602v1 Announce Type: new Abstract: Understanding black hole-galaxy co-evolution and the role of AGN feedback requires complete AGN samples, including heavily obscured systems. In this work, we present the first UV line-selected ([NeV] 3426 and CIV 1549) sample of obscured AGN with full X-ray-to-radio coverage, assembled by combining data from the Chandra COSMOS Legacy survey, the COSMOS2020 catalogue, IR photometry from XID+, and radio observations from the VLA and MIGHTEE surveys. Using CIGALE to perform spectral energy distribution (SED) fitting, we analyse 184 obscured AGN at 0.6 < z < 1.2 and 1.5 < z < 3.1, enabling detailed measurements of AGN and host galaxy properties, and direct comparison with SIMBA hydrodynamical simulations. We find that X-ray and radio data are essential for accurate SED fits, with the radio band proving critical when X-ray detections are missing or in cases of poor IR coverage. Comparisons with matched non-active galaxies and simulations suggest that the [NeV]-selected sources are in a pre-quenching stage, while the CIV-selected ones are likely quenched by AGN activity. Our results indicate that [NeV] and CIV selections target galaxies in a transient phase of their co-evolution, characterised by intense, obscured accretion, and pave the way for future extensions with upcoming large-area, high-z spectroscopic surveys.

astro-ph.HE astro-ph.GA
astro-ph astro-ph 11-26 00:00

In medio stat virtus: enrichment history in poor galaxy clusters

arXiv:2511.19603v1 Announce Type: new Abstract: The enrichment history of galaxy clusters and groups remains far from being fully understood. Recent measurements in massive clusters have revealed remarkably flat iron abundance profiles out to the outskirts, suggesting that similar enrichment processes have occurred for all systems. In contrast, abundance profiles in galaxy groups have sometimes been measured to decline with radius, challenging our understanding of the physical processes at these scales. In this paper, we present a pilot study aimed at accurately measuring the iron abundance profiles of MKW3s, A2589, and Hydra A, three poor clusters with total masses of $M_{500} \simeq 2.0-2.5 \times 10^{14}$ M$_\odot$, intermediate between the scales of galaxy groups and massive clusters. Using XMM-Newton to obtain nearly complete azimuthal coverage of the outer regions of these systems, we show that abundance measurements in the outskirts are more likely to be limited by systematics than by statistical errors. In particular, inaccurate modelling of the soft X-ray background can significantly bias metallicity estimates in regions where the cluster emission is faint. Once these systematics are properly accounted for, the abundance profiles of all three clusters appear to be flat at $Z \sim 0.3$ Z$_{\odot}$, in agreement with values observed in massive clusters. Using available stellar mass estimates, we also computed their iron yields, thereby beginning to probe a largely unexplored mass range. We find $Y_{Fe,500} = 2.68\pm0.34$, $2.54\pm0.64$, and $7.51\pm1.47$ Z$_{\odot}$ for MKW3s, A2589, and Hydra A, respectively, spanning the transition regime between galaxy groups and massive clusters. Future observations of systems with temperatures of $2-4$ keV will be essential to further populate this intermediate-mass regime and to draw firmer conclusions on the chemical enrichment history of galaxy systems across the full mass scale.

astro-ph.CO astro-ph.GA
astro-ph astro-ph 11-26 00:00

SE3D: Testing the recovery of stellar population, dust and structural properties on mock-observed toy model and simulated galaxies

arXiv:2511.19614v1 Announce Type: new Abstract: The translation from direct observables to physical properties of galaxies is a key step in reconstructing their evolutionary histories. Variations in stellar populations and star-dust geometry can induce inhomogeneous mass-to-light ratios, complicating this process. SE3D is a novel modelling framework, built around a radiative transfer emulator, aimed at tackling this problem. In this paper, we test the ability of SE3D to recover known intrinsic properties of toy model and TNG50 simulated galaxies from mock observations of their multi-wavelength photometric and structural properties. We find an encouraging performance for several key characteristics, including the bulk stellar mass, dust mass and SFR, as well as their respective radial extents. We point out limitations, and investigate the impact of various sources of model mismatch. Among them, mismatch in the shapes of star formation histories contributes most, with radial and azimuthal structure and stellar metallicity distributions playing a progressively more minor role. We also analyse the evolution from z=2 to z=0 of resolved stellar and dust properties of TNG galaxies, as measured intrinsically and expressed in their distribution across UVJ and IRX-$\beta$ diagnostic diagrams. We test different methods to assign dust to the simulation, and find a persistent lack of Mdust/Mstar evolution and a more limited dynamic range across the diagnostic diagrams compared to observations.

astro-ph.GA astro-ph.IM
astro-ph astro-ph 11-26 00:00

$\nabla\cdot{B}=0$ versus the Universe

arXiv:2511.19615v1 Announce Type: new Abstract: We implement the constrained divergence cleaning algorithm of Tricco et al. (2016) into the cosmological smoothed particle magnetohydrodynamics (SPMHD) code OpenGadget3. Our implementation modifies the governing equations of SPMHD to allow the constrained hyperbolic/parabolic cleaning scheme to be applied consistently in an expanding cosmological framework. This ensures that divergence errors in the magnetic field are actively propagated away and damped, rather than merely being advected with the flow or partially controlled by source terms. To validate our implementation, we perform a series of standard test problems, including the advection of divergence errors, the Orszag-Tang vortex, the Brio-Wu shock tube, and magnetised Zeldovich pancakes. These tests confirm that our scheme successfully reduces divergence errors while preserving the correct physical evolution of the system. We then apply the method to a fully cosmological simulation of a massive galaxy cluster, comparing the results to those obtained using the previously employed Powell eight-wave divergence preserving scheme. We find that the overall density structure of the cluster is largely unaffected by the choice of divergence cleaning method, and the magnetic field geometry and strengths in the cluster core remain similar. However, in the cluster outskirts ($r \approx 1$-$3\,h^{-1}$ Mpc), the magnetic field is amplified by a factor of $\sim$ 5 compared to the Powell-only approach. Moreover, the constrained divergence cleaning algorithm reduces the divergence error by 2-3 orders of magnitude throughout the cluster volume, demonstrating its effectiveness in maintaining the solenoidal condition of the magnetic field in large-scale cosmological simulations. Our results suggest that accurate divergence control is essential for modeling magnetic field amplification in low-density regions of galaxy clusters.
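For orientation, the hyperbolic/parabolic cleaning idea referenced above is usually written as a coupled subsystem in which a scalar cleaning field $\psi$ carries divergence errors away as waves while damping them. This is a schematic Dedner-type form, not the paper's exact SPMHD/cosmological discretization:

```latex
\frac{\partial \mathbf{B}}{\partial t} = \cdots - \nabla\psi,
\qquad
\frac{\partial \psi}{\partial t} = -c_h^{2}\,(\nabla\cdot\mathbf{B}) - \frac{\psi}{\tau},
\qquad
\tau \sim \frac{h}{\sigma\, c_h},
```

where $c_h$ is the cleaning wave speed, $h$ a resolution scale, and $\sigma$ a dimensionless damping parameter. The wave term propagates $\nabla\cdot\mathbf{B}$ errors away from where they are generated; the $-\psi/\tau$ term dissipates them.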

astro-ph.CO astro-ph.GA astro-ph.IM
astro-ph astro-ph 11-26 00:00

Detection of the Cosmological 21 cm Signal in Auto-correlation at z ~ 1 with the Canadian Hydrogen Intensity Mapping Experiment

arXiv:2511.19620v1 Announce Type: new Abstract: We present the first detection of the cosmological 21 cm intensity mapping signal in auto-correlation at z ~ 1 with the Canadian Hydrogen Intensity Mapping Experiment (CHIME). Using 94 nights of observation, we have measured the 21 cm auto-power spectrum over a frequency range from 608.2 MHz to 707.8 MHz (z = 1.34 to 1.01) at 0.4 h Mpc^-1 < k < 1.5 h Mpc^-1, with a detection significance of 12.5 sigma. Our analysis employs significant improvements to the CHIME data processing pipeline compared to previous work, including novel radio frequency interference (RFI) detection and masking algorithms, achromatic beamforming techniques, and foreground filtering before time averaging to minimize spectral leakage. We establish the robustness and reliability of our detection through a comprehensive suite of validation tests. We also measure the 21 cm signal in two independent sub-bands centered at z ~ 1.08 and z ~ 1.24 with detection significance of 8.7 sigma and 9.2 sigma, respectively. We briefly discuss the theoretical interpretation of these measurements in terms of a power spectrum model, deferring the details to a companion paper. This auto-power spectrum detection demonstrates CHIME's capability to probe large-scale structure through 21 cm intensity mapping without reliance on external galaxy surveys.
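The band edges quoted above map onto the stated redshift range through the 21 cm rest frequency. A quick sanity check (standard frequency-redshift relation, not taken from the paper's pipeline):

```python
# Map the observed 21 cm band edges quoted in the abstract to redshift:
# z = f_rest / f_obs - 1, with the hydrogen hyperfine rest frequency.
F_REST_MHZ = 1420.405752  # 21 cm line rest frequency in MHz

def z_of_freq(f_obs_mhz: float) -> float:
    """Redshift at which the 21 cm line is observed at f_obs_mhz."""
    return F_REST_MHZ / f_obs_mhz - 1.0

z_low_edge = z_of_freq(608.2)   # lower frequency edge -> higher redshift
z_high_edge = z_of_freq(707.8)  # upper frequency edge -> lower redshift
print(round(z_low_edge, 2), round(z_high_edge, 2))  # 1.34 1.01
```

This reproduces the z = 1.34 to 1.01 range stated in the abstract.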

astro-ph.CO astro-ph.GA astro-ph.IM
astro-ph astro-ph 11-26 00:00

SE3D: Building a radiative transfer emulator to fit panchromatic resolved galaxy observations with 3D models of dust and stars

arXiv:2511.19623v1 Announce Type: new Abstract: We present a framework for analysing panchromatic and spatially resolved galaxy observations, dubbed SE3D. SE3D simultaneously and self-consistently models a galaxy's spectral energy distribution and its spectral distributions of global structural parameters: the wavelength-dependent galaxy size, light profile and projected axis ratio. To this end, it employs a machine learning emulator trained on a large library of toy model galaxies processed with 3D dust radiative transfer and mock-observed under a range of viewing angles. The toy models vary in their stellar and dust geometries, and include radial stellar population gradients. The computationally efficient machine learning emulator uses a Bayesian neural network architecture, and reproduces the spectral distributions at an accuracy of ~ 0.05 dex or less across the dynamic range of input parameters, and across the rest-frame UVJ colour space spanned by observed galaxies. We carry out a sensitivity analysis demonstrating that the emulator has successfully learned the intricate mappings between galaxy physical properties and direct observables (fluxes, colours, sizes, size ratios between different wavebands, ...). We further discuss the physical conditions giving rise to a range of total-to-selective attenuation ratios, Rv, with among them most prominently the projected dust surface mass density.

astro-ph.GA astro-ph.IM
astro-ph astro-ph 11-26 00:00

Planetary Habitability Under the Light of a Rapidly Changing Star

arXiv:2511.19646v1 Announce Type: new Abstract: Planetary atmospheric energy budgets primarily depend on stellar incident flux. However, stellar variability can have major consequences for the evolution of planetary climates. In this work, we evaluate how stellar variability influences the equilibrium temperature and water retention of planets within the Habitable Zone (HZ). We present a sample of 9 stars that are known to host at least one planet within the HZ and that were identified to have a variability amplitude exceeding 100 ppm based on photometry from the Transiting Exoplanet Survey Satellite (TESS). We investigate the effect that the variability of these stars has on the insolation flux of their HZ planets and the resulting changes in the induced planetary equilibrium temperature. Our results show that for the stars in our sample, the stellar variability has an insignificant effect on the equilibrium temperature of HZ planets. However, we also emphasize that these stars are not representative of more extreme variable stars, since exoplanets are more difficult to detect and characterize in the presence of extreme variability. We also investigate the equilibrium temperature and long-term evolution of a hypothetical Earth-like planet placed at the inner edge of the HZ around a highly variable star. We found that the water loss rates are comparable between both variable and quiet host stars for Earth-like planets in the inner HZ. Overall, these results broaden our knowledge of the impact of stellar variability on planetary habitability.
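Why a 100 ppm variability amplitude barely moves the equilibrium temperature follows from the quarter-power scaling $T_{\rm eq} \propto F^{1/4}$: a fractional flux change $\delta$ perturbs $T_{\rm eq}$ by only $\delta/4$ to first order. A toy estimate (the 255 K baseline is an illustrative Earth-like value of mine, not from the paper):

```python
# First-order perturbation of planetary equilibrium temperature for a small
# fractional change in incident stellar flux. Since T_eq scales as F^(1/4),
# dT/T ~ delta/4 for a fractional flux change delta.
def teq_change(t_eq_kelvin: float, flux_variation: float) -> float:
    """First-order change in T_eq for fractional flux change `flux_variation`."""
    return t_eq_kelvin * flux_variation / 4.0

# 100 ppm variability (the paper's selection threshold) applied to an
# Earth-like T_eq of ~255 K (illustrative baseline, not from the paper):
dT = teq_change(255.0, 100e-6)
print(f"{dT * 1000:.1f} mK")  # ~6.4 mK: negligible, as the abstract concludes
```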

astro-ph.SR astro-ph.EP
q-bio q-bio 11-26 00:00

A Novel Brain-Computer Interface Architecture: The Brain-Muscle-Hand Interface for replicating the motor pathway

arXiv:2506.02013v2 Announce Type: replace Abstract: Myoelectric interfaces enable intuitive and natural control by decoding residual muscle activity, providing an effective pathway for motor restoration in individuals with preserved musculature. However, in patients with severe muscular atrophy or high-level spinal cord injury, the absence of reliable muscle activity renders myoelectric control infeasible. In such cases, motor brain-computer interfaces (BCIs) offer an alternative route. However, conventional BCI systems rely mainly on noisy cortical signals and classification-based decoding algorithms, which often result in low signal fidelity, limited controllability, and unstable real-time performance. Inspired by the motor pathway--an evolutionarily optimized system that filters, integrates, and transmits motor commands from the brain to the muscles--this study proposes the Brain-Muscle-Hand Interface (BMHI). BMHI decodes cortical EEG signals to reconstruct muscle-level EMG activity, functionally substituting for the muscles and enabling regression-based, continuous, and natural control via a myoelectric interface. To validate this architecture, we performed offline verification, comparative analysis, and online control experiments. Results demonstrate that: (1) the BMHI achieves a prediction accuracy of 0.79; (2) compared with conventional end-to-end brain-hand interfaces, it reduces training time approximately eighteenfold while improving decoding accuracy; and (3) in online operation, the BMHI enables stable and efficient manipulation of both a virtual hand and a robotic arm. Compared with conventional BCIs, the BMHI, by replicating the motor pathway, enables continuous, stable, and naturally intuitive control.
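The key architectural move above is regression (EEG features to continuous EMG activity) rather than classification. A minimal sketch of such a regression-based decoder on simulated data; this is an illustrative stand-in of mine, not the paper's actual model or features:

```python
import numpy as np

# Minimal sketch of regression-based neural decoding in the spirit of the
# BMHI idea: learn a linear map from (simulated) EEG band-power features to
# a (simulated) EMG envelope. Illustrative only; not the paper's decoder.
rng = np.random.default_rng(0)

n_samples, n_eeg = 500, 8
X = rng.standard_normal((n_samples, n_eeg))             # EEG features
w_true = rng.standard_normal(n_eeg)                     # hidden EEG->EMG map
y = X @ w_true + 0.01 * rng.standard_normal(n_samples)  # noisy EMG envelope

# Ridge regression, closed form: w = (X^T X + lam I)^-1 X^T y
lam = 1e-3
w_hat = np.linalg.solve(X.T @ X + lam * np.eye(n_eeg), X.T @ y)

y_pred = X @ w_hat
corr = np.corrcoef(y, y_pred)[0, 1]  # decoding accuracy as correlation
print(round(corr, 3))  # near 1.0 for this low-noise toy
```

The continuous output $\hat y$ can then drive a myoelectric controller directly, which is what makes the scheme regression-based rather than classification-based.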

q-bio.NC
q-bio q-bio 11-26 00:00

Blini: lightweight nucleotide sequence search and dereplication

arXiv:2511.19769v1 Announce Type: new Abstract: Blini is a tool for quick lookup of nucleotide sequences in databases, and for quick dereplication of sequence collections. It is meant to help clean and characterize large collections of assembled contigs or long sequences that would otherwise be too big to search with online tools, or too demanding for a local machine to process. Benchmarks on simulated data demonstrate that it is faster than existing tools and requires less RAM, while preserving search and clustering accuracy.
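The abstract does not describe Blini's internals, but the general shape of sequence dereplication can be sketched as a greedy pass that keeps a sequence only if no already-kept sequence is too similar under a k-mer measure. A toy version (my illustration; real tools use sketching and indexing to avoid the quadratic scan):

```python
# Toy greedy dereplication by k-mer Jaccard similarity. Illustrates the
# general idea only; this is NOT Blini's actual algorithm.
def kmers(seq: str, k: int = 5) -> frozenset:
    """Set of all length-k substrings of seq."""
    return frozenset(seq[i:i + k] for i in range(len(seq) - k + 1))

def jaccard(a: frozenset, b: frozenset) -> float:
    return len(a & b) / len(a | b) if (a or b) else 1.0

def dereplicate(seqs, threshold=0.9, k=5):
    """Keep a sequence only if no previously kept sequence is >= threshold
    similar (greedy and quadratic; fine for a demonstration)."""
    kept, kept_kmers = [], []
    for s in seqs:
        km = kmers(s, k)
        if all(jaccard(km, other) < threshold for other in kept_kmers):
            kept.append(s)
            kept_kmers.append(km)
    return kept

reads = ["ACGTACGTACGTACGT", "ACGTACGTACGTACGT", "TTTTGGGGCCCCAAAA"]
print(dereplicate(reads))  # exact duplicate dropped, two sequences kept
```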

q-bio.QM
q-bio q-bio 11-26 00:00

Spanning Tree Basis for Unbiased Averaging of Network Topologies

arXiv:2511.19894v1 Announce Type: new Abstract: In recent years there has been a paradigm shift from the study of local task-related activation to the organization and functioning of large-scale functional and structural brain networks. However, a long-standing challenge in this large-scale brain network analysis is how to compare network organizations irrespective of their complexity. The maximum spanning tree (MST) has served as a simple, unbiased, standardized representation of complex brain networks and effectively addressed this long-standing challenge. This tree representation, however, has been limited to individual networks. Group-level trees are always constructed from the average network or through a bootstrap procedure. Constructing the group-level tree from the average network introduces bias from individual subjects with outlying connectivities. The bootstrap method can be computationally prohibitive if a good approximation is desired. To address these issues, we propose a novel spectral representation of trees using the spanning tree basis. This spectral representation enables us to compute the average MST and demonstrate that this average tree captures the global properties of all the MSTs in the group and also overlaps with the union of the shortest paths in the functional brain networks.
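The MST used in brain-network analysis is a maximum spanning tree of the weighted connectivity matrix: the strongest connections that join all regions without cycles. A self-contained Kruskal sketch (this shows only the per-subject MST step, not the paper's spanning-tree basis construction):

```python
# Maximum spanning tree of a weighted connectivity matrix via Kruskal's
# algorithm with union-find. Sketch of the per-subject MST step only.
def maximum_spanning_tree(weights):
    """weights: symmetric n x n list of lists. Returns a list of (i, j, w)."""
    n = len(weights)
    edges = sorted(
        ((weights[i][j], i, j) for i in range(n) for j in range(i + 1, n)),
        reverse=True,  # strongest connections first
    )
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    tree = []
    for w, i, j in edges:
        ri, rj = find(i), find(j)
        if ri != rj:              # adding (i, j) does not create a cycle
            parent[ri] = rj
            tree.append((i, j, w))
    return tree

W = [[0.0, 0.9, 0.1, 0.2],
     [0.9, 0.0, 0.8, 0.3],
     [0.1, 0.8, 0.0, 0.7],
     [0.2, 0.3, 0.7, 0.0]]
print(maximum_spanning_tree(W))  # [(0, 1, 0.9), (1, 2, 0.8), (2, 3, 0.7)]
```

A spanning tree over n regions always has exactly n - 1 edges, which is what makes it a standardized, complexity-independent representation for group comparison.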

q-bio.QM
q-bio q-bio 11-26 00:00

Hormonal Regulation of Breast Cancer Incidence Dynamics: A Mathematical Analysis Explaining the Clemmesen's Hook

arXiv:2511.19964v1 Announce Type: new Abstract: Clemmesen's hook refers to a commonly observed slowdown and rebound in breast cancer incidence around the age at menopause. It suggests a shift in the underlying carcinogenic dynamics, but the mechanistic basis remains poorly understood. Building on our previously developed Extended Multistage Clonal Expansion Tumor (MSCE-T) model, we perform a theoretical analysis to determine the conditions under which Clemmesen's hook would occur. Our results show that Clemmesen's hook can be quantitatively explained by time-specific changes in the proliferative and apoptotic balance of early-stage mutated cell populations, corresponding to the decline in progesterone levels and progesterone-driven proliferation due to reduced menstrual cycles preceding menopause, and to a shift in the dominant carcinogenic impact toward alternative growth pathways post-menopause (e.g., adipose-derived growth signals). In contrast, variation in last-stage clonal dynamics cannot effectively reproduce the observed non-monotonic incidence pattern. Analytical results further demonstrate that midlife incidence dynamics corresponding to the hook are governed primarily by intrinsic proliferative processes rather than detection effects. Overall, this study provides a mechanistic and mathematical explanation for Clemmesen's hook and establishes a quantitative framework linking hormonal transitions during menopause to the age-specific breast cancer incidence curve.

q-bio.PE
q-bio q-bio 11-26 00:00

Plumbing Analog of Molecular Computation

arXiv:2511.20339v1 Announce Type: new Abstract: Biological information processing often arises from mesoscopic molecular systems operating far from equilibrium, yet their complexity can make the underlying principles difficult to visualize. In this study, we introduce a macroscopic hydraulic model that serves as an intuitive analog for the molecular switching behavior exhibited by G protein-coupled receptors (GPCRs) on the cell membrane. The hydraulic system reproduces the essential structural and functional features of the molecular switch, including the presence of up to three distinct steady state solutions, the characteristic shapes of these solutions, and the physical interpretation of the control parameters governing the behavior of the system. By mapping water flow, energy barrier height, and siphoning dynamics onto biochemical flux, activation energy, and state transitions, the model provides a transparent representation of the mechanisms that regulate GPCR activation. The correspondence between the hydraulic analog and the molecular system suggests several experimentally testable hypotheses about GPCR function. In particular, the model highlights the central role of energy flux, driven by imbalances in ATP/ADP or GTP/GDP concentrations, in activating the molecular switch and maintaining nonequilibrium signaling states. It also identifies two key parameters that primarily determine switch behavior: the energy difference between the active and inactive states and the effective height of the energy barrier that separates them. These results imply that GPCR signaling dynamics may be governed by generalizable physical principles rather than by biochemical details alone. The hydraulic framework thus offers a tractable platform for interpreting complex molecular behavior and may aid in the development of predictive models of GPCR function in diverse physiological contexts.
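The "up to three distinct steady state solutions" behavior is generic to switches with a cubic nonlinearity: steady states are real roots of a cubic, so the system has either one or three of them depending on the control parameters. A toy illustration (parameters are mine, not fitted to GPCR or hydraulic data):

```python
import numpy as np

# Toy bistable switch: steady states of dx/dt = -(x**3 - a*x + b) are the
# real roots of x**3 - a*x + b = 0. Depending on (a, b) there are one or
# three of them -- the "up to three steady states" behavior of the abstract.
def steady_states(a: float, b: float, tol: float = 1e-7):
    roots = np.roots([1.0, 0.0, -a, b])
    return sorted(r.real for r in roots if abs(r.imag) < tol)

print(len(steady_states(3.0, 1.0)))   # 3: bistable (two wells and a saddle)
print(len(steady_states(-3.0, 1.0)))  # 1: monostable
```

In the bistable regime the outer two roots are the stable "active"/"inactive" states and the middle root plays the role of the energy barrier between them.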

q-bio.SC
physics physics 11-26 00:00

Constructing a Unified Model of Community Formation in Community-Supported Agriculture: Insights from Consumer and Producer Pathways in Japan

arXiv:2511.19459v1 Announce Type: new Abstract: Community Supported Agriculture (CSA) has been recognized globally as a promising framework that embeds agriculture within social relations, yet its diffusion remains limited in contexts such as Japan. Existing studies have largely focused on either consumer or producer participation in isolation, offering fragmented insights and leaving unexplored how their reciprocal processes jointly shape CSA communities. This study addresses this gap by integrating the trajectories of both groups into a comprehensive account of CSA community formation. Drawing on semi-structured interviews with ten CSA producers and ten consumers, we employed the Modified Grounded Theory Approach (M-GTA) to inductively theorize processes of participation and practice. The analysis showed that producers advance CSA through internal adjustments and sense-making to cope with uncertainties, while consumers are guided by life events, practical skills, and prior purchasing experiences toward participation. Synthesizing these insights, we propose a six-phase model of CSA community formation (dispersed interest, awareness, interest formation, motivation, practice, and co-creative continuation) that demonstrates how producers, consumers, and intermediaries interact across stages. The model highlights the pivotal role of key players in sustaining engagement and provides a new perspective for institutionalizing CSA as a durable component of sustainable food systems.

physics.soc-ph
physics physics 11-26 00:00

Causal spillover effects of electric vehicle charging station placement on local businesses: a staggered adoption study

arXiv:2511.19507v1 Announce Type: new Abstract: Understanding the economic impacts of the placement of electric vehicle charging stations (EVCSs) is crucial for planning infrastructure systems that benefit the broader community. Theoretical models have been used to predict human behavior during charging events; however, these models have often neglected the complexity of trip patterns, and have underestimated the real-world impacts of such infrastructure on the local economy. In this paper, we design a quasi-experiment using mobile phone GPS location and EVCS deployment history data to analyze the causal impact of EVCS placement on visitation patterns to businesses. More specifically, we leverage the staggered placement of EVCSs in New York City and California Bay Area to match treated and control businesses that share similar characteristics including the business sector, location, and pre-treatment visitation count. By comparing three alternative matching strategies, we show that staggered adoption avoids selecting controls from non-treated clusters, and yields greater spatial overlap in dense urban areas. We find that EVCS installations significantly increase customer traffic, with effects concentrated in recreational venues in New York City and routine destinations such as groceries, pharmacies, and cafes in California Bay Area. Our results suggest that the economic spillovers of EVCSs vary across urban contexts and highlight the effectiveness of leveraging the staggered nature of adoption timings for evaluating infrastructure impacts in heterogeneous urban environments.
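For matched treated/control designs like the one described above, a common effect estimator is difference-in-differences: the treated business's change in visits minus the matched control's change over the same window. A minimal sketch with made-up numbers (the paper's actual estimator and magnitudes are not reproduced here):

```python
# Minimal difference-in-differences sketch for one matched treated/control
# pair of businesses around an EVCS installation. Numbers are illustrative.
def did_estimate(treated_pre, treated_post, control_pre, control_post):
    """Effect on the treated under the parallel-trends assumption:
    (treated change) minus (matched-control change)."""
    return (treated_post - treated_pre) - (control_post - control_pre)

# Weekly visit counts before/after an installation (hypothetical):
effect = did_estimate(treated_pre=100, treated_post=130,
                      control_pre=95, control_post=105)
print(effect)  # 20 extra weekly visits net of the control's trend
```

Staggered adoption lets not-yet-treated businesses serve as controls, which is what the matching-strategy comparison in the abstract exploits.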

physics.soc-ph
physics physics 11-26 00:00

Placental contractions in uncomplicated pregnancies

arXiv:2511.19547v1 Announce Type: new Abstract: We first described the utero-placental pump phenomenon, in utero, in 2020. We have recruited 36 healthy pregnant women to undergo magnetic resonance imaging (MRI) between 29 and 42 weeks of pregnancy to further explore this occurrence in a single centre prospective observational study. Participants had fetal ultrasound to confirm normal growth. Dynamic MRI was acquired for between 15 and 32 minutes using respiratory triggered, multi-slice, single shot, gradient echo, echo planar imaging covering the whole uterus. All participants had a live birth of a healthy baby weighing over the 10th centile for gestational age and no conditions associated with placental dysfunction e.g. pre-eclampsia. There were no cases of severe maternal or fetal villous malperfusion on placental histopathology. Visible contractions were recorded for all participants who completed MRI scans. Contractions involving a decrease in placental volume >10% were classified as either placental or uterine by visual observation. Placental contractions occurred more frequently than uterine contractions (p=0.0061), were associated with a larger increase in the surface area of the uterine wall not covered by the placenta (p=0.0015), sphericity of the placenta (p<0.0001) and longer durations (p=0.0151). Contractions led to an increase in the MRI parameter R2* in the placenta. There was large variation both between participants and between contractions from the same individual, in terms of the time course and features of contractions. Rate, duration and other features of contractions did not apparently change across the gestational age range studied, although the largest fractional volume changes were detected at early gestation. We found that placental contractions occurred in at least 60% of our healthy pregnant population with a median frequency of 2 per hour and median duration of 2.4 minutes.

physics.med-ph
physics physics 11-26 00:00

Infinite self energy?

arXiv:2511.19571v1 Announce Type: new Abstract: The notion of an infinite electromagnetic self energy of point charges (presumably electrons) is accepted by many electromagnetic textbooks. See, for instance, \cite{jdj,dg,rf}. However, each of these sources acknowledges that they don't understand that result. In this paper, we show that electrons must be point particles with no electromagnetic self energy.
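For context, the textbook divergence alluded to is the electrostatic self-energy of a uniformly charged sphere of radius $R$, which blows up in the point-particle limit. This is the standard classical result, stated here for orientation rather than taken from the paper:

```latex
U_{\text{self}} \;=\; \frac{3}{5}\,\frac{q^{2}}{4\pi\varepsilon_{0} R}
\;\xrightarrow[\;R \to 0\;]{}\; \infty
```

The paper's claim is that this infinity should not be assigned to physical point electrons at all.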

physics.class-ph
physics physics 11-26 00:00

Mechanical Design of the PIP-II ORBUMP Pulsed Dipole Magnet

arXiv:2511.19658v1 Announce Type: new Abstract: The Proton Improvement Plan II (PIP-II) project is a vital upgrade to the Fermilab accelerator complex. The magnet pulse rate of the PIP-II Injection system requires an increase from the current rate of 15 Hz to 20 Hz as well as a roughly 30% increase in the magnetic field of the new Orbital Bump (ORBUMP) pulsed dipole magnets in the Booster. The ORBUMP magnet mechanical design is presented in this paper. The ORBUMP magnet is secured in a vacuum box and the core is made up of 0.127 mm thick, low-carbon steel laminations with a C-5 inorganic magnesium phosphate coating. The core is clamped using external tie bars welded to the core end plates. ANSYS Finite Element Analysis (FEA) was used to evaluate the clamping design to minimize the deflection of the core after welding of the tie bars. The water-cooled, single-turn coil, which shapes the magnetic field by acting as the pole tips, is critical for the integrated field homogeneity. The coil manufacturing tolerances and fabrication techniques were evaluated to ensure the magnetic properties of the magnet could be obtained. The coil is electrically isolated from the core using virgin Polyether ether ketone (PEEK) insulating material in the gap. An investigation into the high voltage performance of the virgin PEEK insulator was conducted via partial discharge testing using a 1:1 scale sample.

physics.acc-ph
physics physics 11-26 00:00

Analog Signal Multiplexing System for the IOTA Proton Injector

arXiv:2511.19772v1 Announce Type: new Abstract: The Fermilab Accelerator Science and Technology (FAST) Facility at FNAL is a dedicated research and development center focused on advancing particle accelerator technologies for future applications worldwide. Currently, a key objective of FAST Operations is to commission the 2.5 MeV IOTA Proton Injector (IPI) and enable proton injection into the Integrable Optics Test Accelerator (IOTA) storage ring. The low- and medium-energy sections of the IPI include four frame-style dipole trims and two multi-function correctors with independently controlled coils, requiring readout of 32 analog channels in total for current and voltage monitoring. To reduce cost and optimize rack space within the PLC-based control system, a 32-to-4 analog signal multiplexing system was designed and implemented. This system enables real-time readback of excitation parameters from all magnetic correctors. This paper presents the design, construction, implementation, and performance of the multiplexing system.
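The abstract does not specify the addressing scheme, but one natural way to realize a 32-to-4 analog multiplexer bank is four 8:1 muxes sharing a single 3-bit select bus. A hypothetical channel-to-address mapping under that assumption (not the paper's documented design):

```python
# Hypothetical addressing for a 32-to-4 analog mux bank: four 8:1 muxes
# sharing one 3-bit select bus. One plausible realization only; the paper
# does not specify its actual scheme.
def mux_address(channel: int):
    """Map logical channel 0-31 to (mux_output 0-3, select_code 0-7)."""
    if not 0 <= channel < 32:
        raise ValueError("channel out of range")
    return channel // 8, channel % 8

print(mux_address(13))  # (1, 5): read ADC input 1 with select code 5
```

Under this layout the PLC cycles the 3 select bits through 0-7 and samples all four ADC inputs at each step, covering all 32 channels in eight scan steps.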

physics.acc-ph
physics physics 11-26 00:00

Direct readout of excited state lifetimes in chlorin chromophores under electronic strong coupling

arXiv:2511.19786v1 Announce Type: new Abstract: The mechanisms governing molecular photophysics under electronic strong coupling (ESC) remain elusive to date. Here, we use ultrafast pump-probe spectroscopy to study the nonradiative excited state relaxation dynamics of chlorin e6 trimethyl ester (Ce6T) under strong coupling of its transition from the electronic ground state to the Qy band. We use dichroic Fabry-Pérot cavities to provide a transparent spectral window in which we can directly track the excited state population following optical pumping of either the strongly-coupled Qy band or the higher-lying B band. This scheme circumvents many of the optical artifacts inherent in ultrafast cavity measurements and allows for facile comparison of strongly-coupled measurements with extracavity controls. We observe no significant changes in excited state lifetimes for any optical pumping schemes or cavity coupling conditions considered herein. These results suggest that Ce6T exhibits identical photophysics under ESC and in free space, presenting a new data point for benchmarking emerging theories for cavity photochemistry.

physics.chem-ph
physics physics 11-26 00:00

Controllable Bistability in Dual-Fiber Optical Trap in Air

arXiv:2511.19804v1 Announce Type: new Abstract: The dual-fiber optical trap, owing to its high sensitivity and facile miniaturization, holds significant practical value in fields such as high-precision metrology of mechanical quantities and biological manipulation. The positional stability of the trapped particle is pivotal to system performance, directly setting the measurement noise floor and operational precision. In this work, we observe bistability and hysteresis in the axial equilibrium position of a 10-μm-diameter SiO2 microsphere. This bistability arises from optical interference between the fiber ends and the microsphere, creating multiple potential wells. Experimental results demonstrate that the microsphere's transition rate can be effectively modulated through precise control of the trapping laser power. Furthermore, introducing a transverse misalignment effectively eliminates bistability, substantially improving positional stability throughout the entire optical trapping region. This suppression reduces the system's residual positional uncertainty to the thermal noise limit. Consequently, this work can enhance the precision of microparticle manipulation and the sensitivity of sensing in dual-fiber optical trap systems.

physics.optics
math math 11-26 00:00

Some Generalizations of Totient Function with Elementary Symmetric Sums

arXiv:2511.19502v1 Announce Type: new Abstract: We generalize certain totient functions using elementary symmetric polynomials and derive explicit product forms for the totient functions involving the second elementary symmetric sum. This work follows from the work of Toth [The Ramanujan Journal, 2022], where the totient function was generalized using the first and the kth elementary symmetric polynomial. We also provide some observations on the behavior of the totient function with an arbitrary jth elementary symmetric polynomial. We then outline a method for solving a certain restricted linear congruence problem with a greatest common divisor constraint on a quadratic form, illustrated by a concrete example. Most importantly, we demonstrate the equivalence between obtaining product forms for generalized totient functions, counting zeros of specific polynomials over finite fields, and resolving a broad class of restricted linear congruence problems.

math.NT
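For context, the classical explicit product forms that such totient generalizations extend (Euler's totient and the Jordan totient, both standard facts rather than the paper's new results) read:

```latex
\varphi(n) \;=\; n \prod_{p \mid n} \left(1 - \frac{1}{p}\right),
\qquad
J_k(n) \;=\; n^k \prod_{p \mid n} \left(1 - \frac{1}{p^k}\right).
```

The paper derives product forms of this type for totients built from the second elementary symmetric sum.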
math math 11-26 00:00

On Some Generalisations of Gauss Sequences

arXiv:2511.19503v1 Announce Type: new Abstract: In this paper, we introduce integer sequences satisfying new congruence properties inspired by the Euler and Gauss congruences, which we call Euler-Gauss sequences. Noting that every Gauss sequence is an Euler-Gauss sequence, we compare them with certain generalizations of Gauss sequences and provide several counterexamples. In particular, the important Smallest Prime Factor (SPF) and Greatest Prime Factor (GPF) sequences (suitably defined at 1) are Euler-Gauss sequences but not Gauss sequences. We further extend these congruence-based integer sequences to a q-analog setting and establish characteristic properties that reveal their structure and fill gaps in the literature on q-Gauss sequences. In recent works, q-Gauss sequences have been shown to admit interesting combinatorial interpretations and to exhibit the Cyclic Sieving Phenomenon (CSP). Not only do our q-Euler-Gauss sequences satisfy the standard CSP with some restriction, but we also derive a new CSP condition for the SPF and GPF sequences, not hitherto known in the literature.

math.NT
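The Gauss congruence referenced here is the standard one: a sequence $(a_n)$ is a Gauss sequence if $\sum_{d \mid n} \mu(n/d)\, a_d \equiv 0 \pmod{n}$ for all $n$. A quick numerical check for the classical Gauss sequence $a_n = 2^n$ (illustrative only, not the authors' code):

```python
def mobius(n):
    """Möbius function via trial division."""
    result, p = 1, 2
    while p * p <= n:
        if n % p == 0:
            n //= p
            if n % p == 0:
                return 0          # n is not squarefree -> mu = 0
            result = -result
        p += 1
    if n > 1:
        result = -result          # one remaining prime factor
    return result

def gauss_sum(a, n):
    """The Gauss-congruence combination sum_{d | n} mu(n/d) * a(d)."""
    return sum(mobius(n // d) * a(d) for d in range(1, n + 1) if n % d == 0)

# a_n = 2^n is a classical Gauss sequence (it counts binary necklaces times n):
checks = [gauss_sum(lambda d: 2 ** d, n) % n for n in range(1, 60)]
```

All residues come out zero, as the congruence predicts; the paper's SPF and GPF sequences satisfy the weaker Euler-Gauss condition but fail exactly this test.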
math math 11-26 00:00

$p$-adic $L$-functions for $\mathrm U(2,1)\times\mathrm U(1,1)$

arXiv:2511.19552v1 Announce Type: new Abstract: We construct the five-variable $p$-adic $L$-function attached to Hida families on $\mathrm U(2,1)\times\mathrm U(1,1)$, interpolating the square root of Rankin-Selberg $L$-values in the "shifted piano" range. Our construction relies on a new theta operator and its $p$-adic variation, which plays a role analogous to the classical Ramanujan-Serre theta operator in Hida's $p$-adic Rankin-Selberg method. The interpolation formula, including the modified Euler factors at $p$ and at the real place, is consistent with the conjectural shape of $p$-adic $L$-functions predicted by Coates and Perrin-Riou.

math.NT
math math 11-26 00:00

Local knots and the prime factorization of links

arXiv:2511.19579v1 Announce Type: new Abstract: The present note contains a new proof of Y. Hashizume's 1958 theorem that every non-split link in $S^3$ admits a unique factorization into prime links. While the new proof does not go far beyond standard techniques, it is considerably shorter than the original proof and avoids most of its case exhaustion. We apply this proof to obtain a string link version (and also an alternative proof) of a 1972 theorem of D. Rolfsen: two PL links in $S^3$ are ambient isotopic if and only if they are PL isotopic and their respective components are ambient isotopic. It is tempting to dismiss this string link version as obvious by deriving it directly either from Rolfsen's or Hashizume's theorem. But this does not seem to be possible, as it turns out that there exists a string link that has no local knots, while its closure has a local knot.

math.GT
math math 11-26 00:00

Extending Douglas-Rachford Splitting for Convex Optimization

arXiv:2511.19637v1 Announce Type: new Abstract: The Douglas-Rachford splitting method is a classical and widely used algorithm for solving monotone inclusions involving the sum of two maximally monotone operators. It was recently shown to be the unique frugal, no-lifting resolvent-splitting method that is unconditionally convergent in the general two-operator setting. In this work, we show that this uniqueness does not hold in the convex optimization case: when the operators are subdifferentials of proper, closed, convex functions, a strictly larger class of frugal, no-lifting resolvent-splitting methods is unconditionally convergent. We provide a complete characterization of all such methods in the convex optimization setting and prove that this characterization is sharp: unconditional convergence holds exactly on the identified parameter regions. These results immediately yield new families of convergent ADMM-type and Chambolle-Pock-type methods obtained through their Douglas-Rachford reformulations.

math.OC
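The classical Douglas-Rachford iteration that the paper generalizes can be sketched on a toy one-dimensional feasibility problem, with f and g the indicator functions of two intervals so that each prox is a projection; this is a standard textbook instance, not the paper's new family of methods:

```python
def proj_interval(x, lo, hi):
    """Euclidean projection onto [lo, hi]: the prox of the interval's indicator."""
    return min(max(x, lo), hi)

def douglas_rachford(z, n_iter=50):
    """Classical two-operator DR iteration for f, g the indicators of
    A = [0, 2] and B = [1, 3]; the shadow point y converges into A ∩ B = [1, 2]."""
    for _ in range(n_iter):
        y = proj_interval(z, 0.0, 2.0)           # prox_f(z)
        w = proj_interval(2 * y - z, 1.0, 3.0)   # prox_g(2y - z), reflected step
        z = z + w - y                            # governing-sequence update
    return proj_interval(z, 0.0, 2.0)            # shadow point

x = douglas_rachford(5.0)
```

The paper's point is that in this subdifferential setting, updates other than `z + w - y` (within a characterized parameter region) still converge unconditionally.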
math math 11-26 00:00

Stable components for gradient-like diffeomorphisms of torus inducing matrix $\begin{pmatrix} -1 & -1\cr 1& 0\end{pmatrix}$

arXiv:2511.19643v1 Announce Type: new Abstract: An isotopy between two diffeomorphisms means the existence of an arc connecting them in the space of diffeomorphisms. Among such arcs there are so-called stable arcs, which do not qualitatively change under small perturbations. In the present paper we consider a set of gradient-like diffeomorphisms $f$ of the 2-torus whose induced isomorphism is given by the matrix $\begin{pmatrix} -1 & -1\cr 1& 0\end{pmatrix}$. We prove that the set of such diffeomorphisms is decomposed into four stable components. Moreover, we establish that two diffeomorphisms under consideration are stably connected if and only if they have the same number of fixed sinks.

math.DS
math math 11-26 00:00

Catalyzing System-level Decarbonization: An Analysis of Carbon Matching As An Accounting Framework

arXiv:2511.19666v1 Announce Type: new Abstract: Carbon matching aims to improve corporate carbon accounting by tracking emissions rather than energy consumption and production. We present a mathematical derivation of carbon matching using marginal emission rates, where the unit of matching is tons of carbon emitted. We present analysis and open source notebooks showing how marginal emissions can be calculated on simulated electric bus networks. Importantly, we prove mathematically that distinct emissions rates can be assigned to all aspects of the electric grid - including transmission, storage, generation, and consumption - completely allocating electric grid emissions. We show that carbon matching is an accurate carbon accounting framework that can inspire ambitious and impactful action. This research fills a gap by blending carbon accounting expertise and power systems modeling to consider the effectiveness of alternative methodologies for allocating electric system emissions.

math.OC
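A minimal sketch of the marginal-emission-rate bookkeeping the abstract builds on, using a toy merit-order dispatch; the generator data, units, and function name are assumptions for illustration, not the paper's model:

```python
def marginal_emission_rate(generators, demand):
    """Merit-order dispatch: fill demand from cheapest generators first and
    return the emission rate (tCO2/MWh) of the marginal (last dispatched) unit.
    Toy illustration of marginal-rate accounting, not the paper's grid model."""
    remaining = demand
    marginal_rate = None
    for cap, cost, rate in sorted(generators, key=lambda g: g[1]):
        if remaining <= 0:
            break
        remaining -= min(cap, remaining)
        marginal_rate = rate        # last unit touched sets the marginal rate
    if remaining > 0:
        raise ValueError("demand exceeds total capacity")
    return marginal_rate

# (capacity MW, cost $/MWh, emission rate tCO2/MWh): wind, coal, gas
gens = [(50, 0.0, 0.0), (60, 25.0, 0.9), (40, 30.0, 0.4)]
```

Under carbon matching, a consumer's matched quantity would be its consumption times this marginal rate rather than an average grid rate, which is what makes the allocation sum exactly to total emissions.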
math math 11-26 00:00

Words with Repeated Letters in a Grid

arXiv:2511.19678v1 Announce Type: new Abstract: Given a word $w$, what is the maximum possible number of appearances of $w$ reading contiguously along any of the directions in $\{-1, 0, 1\}^d \setminus \{\mathbf{0}\}$ in a large $d$-dimensional grid (as in a word search)? Patchell and Spiro first posed a version of this question, which Alon and Kravitz completely answered for a large class of "well-behaved" words, including those with no repeated letters. We study the general case, which exhibits greater variety and is often more complicated (even for $d=1$). We also discuss some connections to other problems in combinatorics, including the storied $n$-queens problem.

math.CO
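The $d = 2$ ("word search") case can be made concrete with a brute-force counter over the $3^2 - 1 = 8$ directions; a small illustrative script, not the paper's combinatorial machinery:

```python
def count_word(grid, word):
    """Count appearances of `word` read contiguously along any of the
    3^2 - 1 = 8 directions of a 2-D grid of letters."""
    rows, cols, L = len(grid), len(grid[0]), len(word)
    dirs = [(dr, dc) for dr in (-1, 0, 1) for dc in (-1, 0, 1) if (dr, dc) != (0, 0)]
    total = 0
    for r in range(rows):
        for c in range(cols):
            for dr, dc in dirs:
                if all(
                    0 <= r + i * dr < rows
                    and 0 <= c + i * dc < cols
                    and grid[r + i * dr][c + i * dc] == word[i]
                    for i in range(L)
                ):
                    total += 1
    return total
```

Already on a 2x2 grid one sees why repeated letters matter: the constant word "AA" packs in many more appearances than "AB" can.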
cs cs 11-26 00:00

PuzzlePoles: Cylindrical Fiducial Markers Based on the PuzzleBoard Pattern

arXiv:2511.19448v1 Announce Type: new Abstract: Reliable perception of the environment is a key enabler for autonomous systems, where calibration and localization tasks often rely on robust visual markers. We introduce the PuzzlePole, a new type of fiducial marker derived from the recently proposed PuzzleBoard calibration pattern. The PuzzlePole is a cylindrical marker, enabling reliable recognition and pose estimation from all 360° of viewing directions. By leveraging the unique combinatorial structure of the PuzzleBoard pattern, PuzzlePoles provide high accuracy in localization and orientation while being robust to occlusions. The design offers flexibility for deployment in diverse autonomous systems scenarios, ranging from robot navigation and SLAM to tangible interfaces.

cs.CV
cs cs 11-26 00:00

AI-driven Predictive Shard Allocation for Scalable Next Generation Blockchains

arXiv:2511.19450v1 Announce Type: new Abstract: Sharding has emerged as a key technique to address blockchain scalability by partitioning the ledger into multiple shards that process transactions in parallel. Although this approach improves throughput, static or heuristic shard allocation often leads to workload skew, congestion, and excessive cross-shard communication, diminishing the scalability benefits of sharding. To overcome these challenges, we propose the Predictive Shard Allocation Protocol (PSAP), a dynamic and intelligent allocation framework that proactively assigns accounts and transactions to shards based on workload forecasts. PSAP integrates a Temporal Workload Forecasting (TWF) model with a safety-constrained reinforcement learning (Safe-PPO) controller, jointly enabling multi-block-ahead prediction and adaptive shard reconfiguration. The protocol enforces deterministic inference across validators through a synchronized quantized runtime and a safety gate that limits stake concentration, migration gas, and utilization thresholds. By anticipating hotspot formation and executing bounded, atomic migrations, PSAP achieves stable load balance while preserving Byzantine safety. Experimental evaluation on heterogeneous datasets, including Ethereum, NEAR, and Hyperledger Fabric mapped via address-clustering heuristics, demonstrates up to 2x throughput improvement, 35% lower latency, and 20% reduced cross-shard overhead compared to existing dynamic sharding baselines. These results confirm that predictive, deterministic, and security-aware shard allocation is a promising direction for next-generation scalable blockchain systems.

cs.DC
econ econ 11-26 00:00

Dynamic Mechanism Collapse: A Boundary Characterization

arXiv:2511.19781v1 Announce Type: new Abstract: When are dynamics valuable? In Bayesian environments with public signals and no intertemporal commitment, we study a seller who allocates an economically single-shot resource over time. We provide necessary and sufficient conditions under which the optimal dynamic mechanism collapses to a simple terminal design: a single public experiment at date 0 followed by a posterior-dependent static mechanism executed at a deterministic date, with no further disclosure. The key condition is the existence of a global affine shadow value that supports the posterior-based revenue frontier and uniformly bounds all history-dependent revenues. When this condition fails, a collapse statistic pinpoints the dates and public state variables that generate genuine dynamic value. The characterization combines martingale concavification on the belief space with an affine-support duality for concave envelopes.

econ.TH
econ econ 11-26 00:00

Dynamic Reward Design

arXiv:2511.19838v1 Announce Type: new Abstract: This paper studies a dynamic screening model in which a principal hires an agent with limited liability. The agent's private cost of working is an i.i.d. draw from a continuous distribution. His working status is publicly observable. The limited liability constraint requires that payments remain nonnegative at all times. In this setting, despite costs being i.i.d. and the payoffs being additively separable across periods, the optimal mechanism does not treat each period independently. Instead, it features backloaded payments and requires the agent to work in consecutive periods. Specifically, I characterize conditions under which the optimal mechanism either grants the agent flexibility to start working in any period or restricts the starting period to the first. In either case, once the agent begins working, he is incentivized to work consecutively until the end.

econ.TH
econ econ 11-26 00:00

Reserve System with Beneficiary-Share Guarantee

arXiv:2511.20077v1 Announce Type: new Abstract: We study allocation problems with reserve systems under minimum beneficiary-share guarantees, requirements that targeted matches constitute at least a specified percentage of total matches. While such mandates promote targeted matches, they inherently conflict with maximizing total matches. We characterize the complete non-domination frontier using minimal cycles, where each point represents an allocation that cannot increase targeted matches without sacrificing total matches. Our main results: (i) the frontier exhibits concave structure with monotonically decreasing slope, (ii) traversing from maximum targeted matches to maximum total matches reduces matches by at most half, (iii) the Repeated Hungarian Algorithm computes all frontier points in polynomial time, and (iv) mechanisms with beneficiary-share guarantees can respect category-dependent priority orderings but necessarily violate path-independence. These results enable rigorous evaluation of beneficiary-share policies across diverse allocation contexts.

econ.TH
econ econ 11-26 00:00

Recursive contracts in non-convex environments

arXiv:2511.20303v1 Announce Type: new Abstract: In this paper we examine non-convex dynamic optimization problems with forward looking constraints. We prove that the recursive multiplier formulation of Marcet and Marimon (2019) gives the optimal value if one assumes that the planner has access to a public randomization device and forward looking constraints only have to hold in expectations. Whether one formulates the functional equation as a sup-inf problem or as an inf-sup problem is essential for the timing of the optimal lottery and for determining which constraints have to hold in expectations. We discuss for which economic problems the use of lotteries can be considered a reasonable assumption. We provide a general method to recover the optimal policy from a solution of the functional equation. As an application of our results, we consider the Ramsey problem of optimal government policy and give examples where lotteries are essential for the optimal solution.

econ.TH
econ econ 11-26 00:00

Persuasion and Optimal Stopping

arXiv:2406.12278v3 Announce Type: replace Abstract: We study how a principal can jointly shape an agent's timing and action through information. We develop a revelation principle: with intertemporal commitment, the problem simplifies to choosing a joint distribution over stopping times and beliefs, delivering a tractable first-order approach, and an anti-revelation principle: without commitment, informative interim recommendations are necessary and sufficient to implement the optimal commitment outcome. We apply the method to analyze (i) moving the goalposts, where inching rather than teleporting the goalposts can be achieved without commitment; (ii) dynamic binary persuasion, where optimal policies combine suspense generation with action-targeted Poisson news; and (iii) dynamic linear persuasion with a continuum of states, where a tail-censorship policy with expanding disclosure intervals is optimal.

econ.TH
econ econ 11-26 00:00

Sequential Network Design

arXiv:2409.14136v3 Announce Type: replace Abstract: We study dynamic network formation from a centralized perspective. In each period, the social planner builds a single link to connect previously unlinked pairs. The social planner is forward-looking, with instantaneous utility monotonic in the aggregate number of walks of various lengths. We show that forming a nested split graph at each period is optimal, regardless of the discount function. When the social planner is sufficiently myopic, it is optimal to form a quasi-complete graph at each period, which is unique up to permutation. This finding provides a micro-foundation for the quasi-complete graph, as it is formed under a greedy policy. We also investigate the robustness of these findings under non-linear best response functions and weighted networks.

econ.TH
econ econ 11-26 00:00

Selection Procedures in Competitive Admission

arXiv:2510.12653v2 Announce Type: replace Abstract: Two identical firms compete to attract and hire from a pool of candidates of unknown productivity. Firms simultaneously post a selection procedure which consists of a test and an acceptance probability for each test outcome. After observing the firms' selection procedures, each candidate can apply to one of them. Firms can vary both the accuracy (Lehmann, 1988) and difficulty (Hancart, 2024) of their test. The firms face two key considerations when choosing their selection procedure: the statistical properties of their test and the selection into the procedure by the candidates. I show that there is a unique symmetric equilibrium where the test is maximally accurate but minimally difficult. Intuitively, competition leads to maximal but misguided learning: firms end up having precise knowledge that is not payoff relevant. I also consider the cases where firms face capacity constraints or can make wage offers, as well as asymmetric equilibria in which one firm is more selective than the other.

econ.TH
econ econ 11-26 00:00

Heterogeneity in peer effects for binary outcomes

arXiv:2511.15891v3 Announce Type: replace Abstract: I introduce heterogeneity into the analysis of peer effects that arise from conformity, allowing the strength of the taste for conformity to vary across agents' actions. Using a structural model based on a simultaneous network game with incomplete information, I derive conditions for equilibrium uniqueness and for the identification of heterogeneous peer-effect parameters. I also propose specification tests to determine whether the conformity model or the spillover model is consistent with the observed data in the presence of heterogeneous peer effects. Applying the model to data on smoking and alcohol consumption among secondary school students, I show that assuming a homogeneous preference for conformity leads to biased estimates.

econ.EM
astro-ph astro-ph 11-26 00:00

Simulated Rotation Measure Sky from Primordial Magnetic Fields

arXiv:2511.19508v1 Announce Type: new Abstract: Primordial Magnetic Fields (PMFs) -- magnetic fields originating in the early Universe and permeating the cosmological scales today -- can explain the observed microGauss-level magnetisation of galaxies and their clusters. In light of current and upcoming all-sky radio surveys, PMFs have drawn attention not only as major candidates for explaining the large-scale magnetisation of the Universe, but also as potential probes of early-Universe physics. In this paper, using cosmological simulations coupled with light-cone analysis, we study for the first time the imprints of the PMF structure on the mean rotation measure (RM) originating in the intergalactic medium (IGM), $\langle \mathrm{RM_{IGM}}\rangle$. We introduce a new method for producing full-sky $\mathrm{RM_{IGM}}$ distributions and analyse the autocorrelation of $\mathrm{RM_{IGM}}$ on small and large angular scales; we find that PMF structures indeed show distinct signatures. The large-scale uniform model (characterised by an initially unlimited coherence scale) leads to correlations up to 90 degrees, while correlations for small-scale stochastic PMF models drop by a factor of $100$ at angular scales of $0.17$, $0.13$, and $0.11$ degrees, corresponding to $5.24$, $4.03$, and $3.52$ Mpc (at redshift $z=2$) for magnetic fields with comoving coherence scales of $3.49$, $1.81$, and $1.00$ Mpc/h, respectively; the correlation amplitude of the PMF model with comoving $\sim 19$ Mpc/h coherence scale drops by only a factor of $10$ at 1 degree (30.6 Mpc). These results suggest that improvements in the modelling of Galactic RM will be necessary to investigate the signature of large-scale correlated PMFs. A comparison of the $\langle \mathrm{RM_{IGM}}\rangle$ redshift dependence obtained from our simulations with that from the LOFAR Two-metre Sky Survey shows agreement with our previous upper-limit estimates of the PMF strength derived from RM-rms analysis.

astro-ph.CO
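The rotation measure underlying these maps is the standard Faraday integral, $\mathrm{RM} = 0.812 \int n_e B_\parallel \, dl$ in rad m$^{-2}$ with $n_e$ in cm$^{-3}$, $B_\parallel$ in $\mu$G, and $dl$ in pc. A discretized toy sightline (textbook relation, not the paper's light-cone pipeline):

```python
def rotation_measure(n_e, b_par, dl):
    """Discretized Faraday rotation measure in rad m^-2:
    RM = 0.812 * sum_i n_e[i] (cm^-3) * b_par[i] (microgauss) * dl[i] (pc)."""
    return 0.812 * sum(n * b * d for n, b, d in zip(n_e, b_par, dl))

# toy uniform sightline: n_e = 0.01 cm^-3, B_parallel = 1 uG over 10 cells of 100 pc
rm = rotation_measure([0.01] * 10, [1.0] * 10, [100.0] * 10)
```

Because $B_\parallel$ enters with its sign, fields that reverse along the line of sight partially cancel, which is why the angular coherence scale of the PMF leaves an imprint in the RM autocorrelation.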
astro-ph astro-ph 11-26 00:00

Evolutionary Processes in the Centaur Region

arXiv:2511.19554v1 Announce Type: new Abstract: Centaurs populate relatively short-lived and rapidly evolving orbits in the giant-planet region and are believed to be one of the solar system's most complex and diverse populations. Most Centaurs are linked to origins in the dynamically excited component of the trans-Neptunian region, and are often considered an intermediate phase in the evolution of Jupiter-family comets (JFCs). Additionally, the Centaur region hosts objects from varied source populations and having different dynamical histories. In this chapter, we focus on the physical processes responsible for the evolution of this heterogeneous population in the giant-planet region. The chapter begins with a brief review on the origin and early evolution that determine Centaurs' properties prior to entering the giant-planet region. Next, we discuss the thermal, collisional, and tidal processes believed to drive the changes Centaurs undergo. We provide a comprehensive review of the evidence for evolutionary changes derived from studies of the activity, physical properties, and surface characteristics of Centaurs and related populations, such as trans-Neptunian objects, JFCs, and Trojans. This chapter reveals a multitude of gaps in the current understanding of the evolution mechanisms acting in the giant-planet region. In light of these open questions, we conclude with an outlook on future telescope and spacecraft observations, detailing how they are expected to elucidate Centaur evolution processes.

astro-ph.EP
astro-ph astro-ph 11-26 00:00

Estimating the masses of Narrow line Seyfert 1 galaxies using damped random walk method

arXiv:2511.19587v1 Announce Type: new Abstract: Narrow-line Seyfert 1 galaxies (NLSy1s) are a subclass of active galactic nuclei (AGNs), commonly associated with rapidly accreting, relatively low-mass black holes ($10^6$ - $10^8 M_\odot$) hosted in spiral galaxies. Although typically considered to have high Eddington ratios, recent observations, particularly of $\gamma$-ray-emitting NLSy1s, have raised questions about their true black hole masses, with some estimates approaching those of Broad-line Seyfert 1 (BLSy1) systems. In this work, we present recalibrated mass estimates for a large sample of NLSy1 galaxies with $z < 0.8$. We apply the damped random walk (DRW) formalism to a comparison set of 1,141 NLSy1 and 1,143 BLSy1 galaxies, matched in redshift and bolometric luminosity using SDSS DR17 spectroscopy. Our analysis employs a multivariate calibration that incorporates both the Eddington ratio and the rest-frame wavelength to refine the mass estimates. We obtain median DRW-based black hole masses of $\text{log}(M_{\text{BH}}^{\text{DRW}}/M_\odot) = 6.25 \pm 0.65$ for NLSy1s and $7.07 \pm 0.67$ for BLSy1s, in agreement with their respective virial mass distributions. Furthermore, we identify strong inverse trends between the variability amplitude and both optical luminosity and FeII emission strength, consistent with a scenario where higher accretion rates suppress long-term optical variability. These findings reinforce the view that NLSy1s harbor smaller black holes and highlight the value of variability-based approaches in tracing AGN accretion properties.

astro-ph.GA
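The DRW model fitted here is the Ornstein-Uhlenbeck process, whose exact discretization can be simulated in a few lines; an illustrative sketch of the model, not the authors' calibration code:

```python
import math
import random

def simulate_drw(n, tau, sigma, dt=1.0, mean=0.0, seed=42):
    """Damped random walk (Ornstein-Uhlenbeck) light curve via the exact
    discretization x_{i+1} = mean + (x_i - mean) * exp(-dt/tau)
    + N(0, sigma^2 * (1 - exp(-2 dt/tau))), where tau is the damping
    timescale and sigma the asymptotic standard deviation."""
    rng = random.Random(seed)
    rho = math.exp(-dt / tau)
    x = [mean]
    for _ in range(n - 1):
        x.append(mean + (x[-1] - mean) * rho
                 + rng.gauss(0.0, sigma * math.sqrt(1.0 - rho * rho)))
    return x

lc = simulate_drw(20000, tau=100.0, sigma=0.2)
```

Fitting `tau` and `sigma` to observed light curves is what allows variability amplitude to be tied back to luminosity and, via calibrations like the one in the paper, to black hole mass.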
astro-ph astro-ph 11-26 00:00

X-ray, optical, and radio follow-up of five thermally emitting isolated neutron star candidates

arXiv:2511.19591v1 Announce Type: new Abstract: We report on follow-up observations with XMM-Newton, the FORS2 instrument at the ESO-VLT, and FAST, aiming to characterise the nature of five thermally emitting isolated neutron star (INS) candidates recently discovered from searches in the footprint of the Spectrum Roentgen Gamma (SRG)/eROSITA All-sky Survey. We find that the X-ray spectra are predominantly thermal and can be described by low-absorbed blackbody models with effective temperatures ranging from 50 to 210 eV. In two sources, the spectra also show narrow absorption features at $300 - 400$ eV. Additional non-thermal emission components are not detected in any of the five candidates. The soft X-ray emission, the absence of optical counterparts in four sources, and the consequent large X-ray-to-optical flux ratios $>3000 - 5400$ confirm their INS nature. For the remaining source, eRASSU J144516.0-374428, the available data do not allow a confident exclusion of an active galactic nucleus nature. However, if the source is Galactic, the small inferred X-ray emitting region is reminiscent of a heated pulsar polar cap, possibly pointing to a binary pulsar nature. X-ray timing searches do not detect significant modulations in any of the candidates, implying pulsed fraction upper limits of 13 - 19% ($0.001-13.5$ Hz). The absence of pulsations in the FAST observations targeting eRASSU J081952.1-131930 and eRASSU J084046.2-115222 excludes periodic magnetospheric emission at 1 - 1.5 GHz with an $8\sigma$ significance down to 4.08 $\mu$Jy and 2.72 $\mu$Jy, respectively. The long-term X-ray emission of all sources shows no significant variability. Additional observations are warranted to establish the exact neutron star types. At the same time, the confirmation of the predominantly thermal neutron star nature in four additional sources highlights the power of SRG/eROSITA to complement the Galactic INS population.

astro-ph.HE
astro-ph astro-ph 11-26 00:00

Pixellated Posterior Sampling of Point Spread Functions in Astronomical Images

arXiv:2511.19594v1 Announce Type: new Abstract: We introduce a novel framework for upsampled Point Spread Function (PSF) modeling using pixel-level Bayesian inference. Accurate PSF characterization is critical for precision measurements in many fields, including weak lensing, astrometry, and photometry. Our method defines the posterior distribution of the pixelized PSF model through the combination of an analytic Gaussian likelihood and a highly expressive generative diffusion model prior, trained on a library of HST ePSF templates. Compared to traditional methods (parametric Moffat, ePSF template-based, and regularized likelihood), we demonstrate that our PSF models achieve orders of magnitude higher likelihood and residuals consistent with noise, all while remaining visually realistic. Further, the method applies even for faint and heavily masked point sources, merely producing a broader posterior. By recovering a realistic, pixel-level posterior distribution, our technique enables the first meaningful propagation of detailed PSF morphological uncertainty in downstream analysis. An implementation of our posterior sampling procedure is available on GitHub.

astro-ph.IM
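One of the parametric baselines mentioned, the Moffat profile, has the standard radial form $I(r) \propto [1 + (r/\alpha)^2]^{-\beta}$. A minimal evaluation with illustrative parameter values (not fitted to any data, and not the paper's pixel-level model):

```python
def moffat(r, alpha=2.0, beta=3.5):
    """Radial Moffat PSF profile I(r) = [1 + (r/alpha)^2]^(-beta),
    normalized to a unit peak; alpha and beta here are illustrative."""
    return (1.0 + (r / alpha) ** 2) ** (-beta)

# the half-maximum radius satisfies (r/alpha)^2 = 2^(1/beta) - 1 exactly,
# so FWHM = 2 * alpha * sqrt(2^(1/beta) - 1)
r_half = 2.0 * (2.0 ** (1.0 / 3.5) - 1.0) ** 0.5
```

A few-parameter profile like this is exactly what the pixelized diffusion prior is meant to outclass: it cannot capture the detailed, asymmetric morphology of real HST PSFs.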
astro-ph astro-ph 11-26 00:00

Role of magnetic reconnection in blazar variability using numerical simulation

arXiv:2511.19605v1 Announce Type: new Abstract: Fast $\gamma$-ray variability in blazars remains a central puzzle in high-energy astrophysics, challenging standard shock acceleration models. Blazars, a subclass of active galactic nuclei (AGN) with jets pointed close to our line of sight, offer a unique view into jet dynamics. Blazar $\gamma$-ray light curves exhibit rapid, high-amplitude flares that point to promising alternative dissipation mechanisms such as magnetic reconnection. This study uses three-dimensional relativistic magnetohydrodynamic (RMHD) and resistive relativistic magnetohydrodynamic (ResRMHD) simulations with the PLUTO code to explore magnetic reconnection in turbulent, magnetized plasma columns. Focusing on current-driven kink instabilities, we identify the formation of current sheets due to magnetic reconnection, leading to plasmoid formation. We develop a novel technique combining hierarchical structure analysis and reconnection diagnostics to identify reconnecting current sheets. A statistical analysis of their geometry and orientation reveals a smaller subset that aligns closely with the jet axis, consistent with the jet-in-jet model. These structures can generate relativistically moving plasmoids with significant Doppler boosting, offering a plausible mechanism for the fast flares superimposed on slowly varying blazar light curves. These findings provide new insights into the plasma dynamics of relativistic jets and strengthen the case for magnetic reconnection as a key mechanism in blazar $\gamma$-ray variability.

astro-ph.HE
astro-ph astro-ph 11-26 00:00

Connecting clustering and the cosmic web: Observational constraints on secondary halo bias

arXiv:2511.19607v1 Announce Type: new Abstract: Cosmological simulations predict significant secondary dependencies of halo clustering on internal properties and environment. Detecting these subtle signals in observational data remains challenging, with important ramifications for galaxy evolution and cosmology. We probe secondary halo bias in observational survey data, using galaxy groups as dark matter halo proxies. We quantify secondary bias using central galaxy colour and environmental diagnostics. We use an extended, refined galaxy group catalogue from the Sloan Digital Sky Survey. Secondary bias is defined as any deviation in group clustering strength at fixed mass, quantified through the projected two-point correlation function. Our environmental analysis uses DisPerSE to compute distances to critical points of the density field, incorporating local group overdensity measurements on multiple scales. We robustly detect several forms of secondary bias in the clustering of galaxy groups. At fixed mass, groups hosting red central galaxies are more strongly clustered than those with blue centrals, with $b_{\rm relative}$ ranging from $\sim 1.2$ for the 15% reddest centrals to $\sim 0.8$ for the bluest ones. Environmental dependencies based on cosmic-web distances are also present, though significantly weaker and largely mass-independent. The strongest signal arises from local overdensity: groups in the densest 15% of environments reach $b_{\rm relative} \sim 1.4$, while those in the least dense regions fall to $b_{\rm relative} \sim 0.7$. These results establish a clear observational hierarchy for secondary halo bias. The colour of central galaxies correlates with the local group overdensity, which, in turn, correlates with the bias at fixed group mass. Assuming that central galaxy colour traces halo assembly history, this three-stage picture offers a conceptual link between our results and halo assembly bias.

astro-ph.CO
astro-ph 11-26 00:00

Metal enrichment of galaxies in a massive node of the Cosmic Web at $z \sim 3$

arXiv:2511.19608v1 Announce Type: new Abstract: We present the mass-metallicity relation for star-forming galaxies in the MUSE Quasar Nebula 01 (MQN01) field, a massive cosmic web node at $z \sim 3.245$, hosting one of the largest overdensities of galaxies and AGNs found so far at $z > 3$. Through James Webb Space Telescope (JWST) Near Infrared Spectrograph (NIRSpec) spectra and images from JWST and Hubble Space Telescope (HST), we identify a sample of 9 star-forming galaxies in the MQN01 field with detection of nebular emission lines ($\rm H\beta$, [OIII], $\rm H\alpha$, [NII]), covering the mass range of $\rm 10^{7.5}M_\odot - 10^{10.5}M_\odot$. We present the relations of the emission-line flux ratios versus stellar mass for the sample and derive the gas-phase metallicity based on the strong line diagnostics of [OIII]$\lambda5008$/$\rm H\beta$ and [NII]$\lambda6585$/$\rm H\alpha$. Compared to typical field galaxies at similar redshifts, MQN01 galaxies show relatively higher [NII]$\lambda6585$/$\rm H\alpha$ and lower [OIII]$\lambda5008$/$\rm H\beta$ at the same stellar mass, which implies a higher metallicity by about $0.25\pm 0.07$ dex with respect to the field mass-metallicity relation. These differences decrease when the ``Fundamental Metallicity Relation'' is considered, i.e. when the galaxies' Star Formation Rates (SFR) are also taken into account. We argue that these results are consistent with a scenario in which galaxies in overdense regions assemble their stellar mass more efficiently (or, equivalently, start forming at earlier epochs) compared to field galaxies at similar redshifts.

astro-ph.GA
astro-ph 11-26 00:00

Beyond the Monsters: A More Complete Census of Black Hole Activity at Cosmic Dawn

arXiv:2511.19609v1 Announce Type: new Abstract: JWST has revealed an abundance of low-luminosity active galactic nuclei (AGN) at high redshifts ($z > 3$), pushing the limits of black hole (BH) science in the early Universe. Results have claimed that these BHs are significantly more massive than expected from the BH mass-host galaxy stellar mass relation derived from the local Universe. We present a comprehensive census of the BH populations in the early Universe through a detailed stacking analysis of galaxy populations, binned by luminosity and redshift, using JWST spectroscopy from the CEERS, JADES, RUBIES, and GLASS extragalactic deep field surveys. Broad H$\alpha$ detections in $31\%$ of the stacked spectra (5/16 bins) imply median BH masses of $10^{5.21} - 10^{6.13}~ \rm{M_{\odot}}$ and the stacked SEDs of these bins indicate median stellar masses of $10^{7.84} - 10^{8.56} ~\rm{M_{\odot}}$. This suggests that the median galaxy hosts a BH that is at most a factor of 10 over-massive compared to its host galaxy and lies closer to the locally derived $M_{BH}-M_*$ relation. We investigate the seeding properties of the inferred BHs and find that they can be well-explained by a light stellar remnant seed undergoing moderate Eddington accretion. Our results indicate that individual detections of AGN are more likely to sample the upper envelope of the $M_{BH}-M_*$ distribution, while stacking on ``normal'' galaxies and searching for AGN signatures can overcome the selection bias of individual detections.

astro-ph.GA
astro-ph 11-26 00:00

Metal-loaded outflows in sub-Milky Way galaxies in the CIELO simulations

arXiv:2511.19630v1 Announce Type: new Abstract: Supernova (SN) feedback-driven galactic outflows are a key physical process that contributes to the baryon cycle by regulating star formation activity, reducing the amount of metals in low-mass galaxies and enriching the circumgalactic (CGM) and intergalactic media (IGM). We aim to understand the chemical loop of sub-Milky Way (MW) galaxies and their nearby regions. We studied 15 simulated central sub-MW galaxies (M* <= 10^10 Msun) and intermediate-mass galaxies (M* \sim 10^10 Msun) from the CIELO-P7 high-resolution simulations. We followed the evolution of the progenitor galaxies, their properties and the characteristics of the outflows within the redshift range z = [0, 7]. We used two dynamically-motivated outflow definitions, unbound outflows and expelled mass rates, to quantify the impact of SN feedback. At z \sim 0, sub-MW galaxies have a larger fraction of their current oxygen mass in the gas phase but have expelled a greater portion beyond the virial radius, compared to their higher-mass counterparts. Galaxies with M* \lesssim 10^9 Msun have 10-40 per cent of their total oxygen mass within R200 in the CGM, and an equivalent to 10-60 per cent expelled into the IGM. In contrast, more massive galaxies have most of the oxygen mass locked up in their stellar populations. The CGM of low-mass galaxies predominantly contains oxygen in low-temperature gas, acting as a metal reservoir. We find that the outflows are more oxygen-rich for sub-MW galaxies, Zout/ZISM \sim 1.5, than for higher-mass galaxies, Zout/ZISM <= 0.5, particularly for z < 2. Mass-loading factors of eta_out \sim 0 - 6 are detected, in agreement with observations (abridged).

astro-ph.GA
astro-ph 11-26 00:00

Disc growth and vertical heating of lenticular galaxies in the Fornax cluster

arXiv:2511.19632v1 Announce Type: new Abstract: We present a detailed analysis of the vertical and radial structure of mono-age stellar populations in three edge-on lenticular galaxies (FCC 153, FCC 170, and FCC 177) in the Fornax cluster, using deep MUSE observations. By measuring the half-mass radius (R$_{50}$) and half-mass height (z$_{50}$) across 1 Gyr-wide age bins, we trace the spatial evolution of stellar populations over cosmic time. All galaxies exhibit a remarkably constant disc thickness for all stars younger than ~6 Gyr, suggesting minimal secular heating and limited impact from environmental processes such as tidal shocking or harassment. Evidence of past mergers (8-10 Gyr ago) is found in the increase of z$_{50}$ for older populations. We find that accreted (metal-poor) stars have been deposited in quite thick configurations, but that the interactions only moderately thickened pre-existing stars in the galaxies, and only caused mild flaring in the outer regions of the discs. The radial structure of the discs varies across galaxies, but in all cases we find that the radial extent of mono-age populations remains constant or grows over the past 8 Gyr. This leads us to argue that within the radial range we consider, strangulation, rather than ram-pressure stripping, is the dominant quenching mechanism in those galaxies. Our results highlight the usefulness of analysing the structure of mono-age populations to uncover the mechanisms driving galaxy evolution, and we anticipate broader insights from the GECKOS survey, studying 36 nearby edge-on disc galaxies.

astro-ph.GA