Today's Digest · AI Overview

Cross-disciplinary picks from the past 24 hours

arXiv multi-disciplinary RSS feeds are fetched automatically, titles and abstracts are polished by DeepSeek, and a digest is compiled within 24 hours at the fastest.

AI Overview

Today's highlights (auto-generated summary): cs: How to Expand a Self-orthogonal Code; cs: Covert Communication and Key Generation Over Quantum State-Dependent Channels; cs: Causal Intervention Sequence Analysis for Fault Tracking in Radio Access Networks

About This Digest

Data source: official arXiv RSS feeds (physics / math / cs / q-bio / econ / astro-ph, etc.).

Titles and abstracts are polished into Chinese by DeepSeek for quick browsing; external links lead to the original papers.

AI digest assistant: clicking "Full-text digest" on a card fetches the arXiv HTML, generates key points for the full paper, and caches them; the floating button in the lower-right corner expands or collapses the digest at any time.

Automated fetching: a batch is initialized daily at 14:00 and each discipline is polled every 15 minutes; if no data arrives within 24 h, the window falls back to 72 h and then 7 days; days when arXiv pauses updates (weekends) are skipped automatically. A minimal sketch of this schedule follows the list below.

Archive: click "Archive" above to open the calendar page, view the daily fetch counts by month, and jump to the digest for a given date.
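
As a concrete illustration of the fetching schedule above, here is a minimal sketch of the poll-and-fallback logic; the helper names (`fetch_since`, `SUBJECTS`) are hypothetical, since the actual pipeline code is not part of this page:

```python
from datetime import datetime, timedelta

SUBJECTS = ["physics", "math", "cs", "q-bio", "econ", "astro-ph"]  # polled feeds
FALLBACK_WINDOWS = [timedelta(hours=24), timedelta(hours=72), timedelta(days=7)]

def fetch_since(subject, since):
    """Hypothetical helper: return RSS entries for `subject` newer than `since`."""
    raise NotImplementedError

def poll_subject(subject, now=None):
    """Try the 24 h window first, then fall back to 72 h and 7 days."""
    now = now or datetime.utcnow()
    for window in FALLBACK_WINDOWS:
        entries = fetch_since(subject, now - window)
        if entries:                      # stop at the first window that has data
            return entries
    return []                            # e.g. arXiv weekend pause: skip quietly

def run_batch():
    """Daily batch (initialized at 14:00); each subject is re-polled every 15 min."""
    return {s: poll_subject(s) for s in SUBJECTS}
```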

2025-11-25 Digest · Computer Science

24 items fetched on 2025-11-25, sorted by overall popularity

cs · 11-25 00:00

How to Expand a Self-orthogonal Code

arXiv:2511.17503v1 Announce Type: new Abstract: In this paper, we show how to expand Euclidean/Hermitian self-orthogonal codes while preserving their orthogonality. Our results show that every $k$-dimensional Hermitian self-orthogonal code is contained in a $(k+1)$-dimensional Hermitian self-orthogonal code. Also, for $k< n/2-1$, every $[n,k]$ Euclidean self-orthogonal code is contained in an $[n,k+1]$ Euclidean self-orthogonal code. Moreover, for $k=n/2-1$ and $p=2$, the expansion can also be carried out, while for $k=n/2-1$ and $p$ an odd prime, the expansion can be carried out if and only if an extra condition is satisfied. We also propose two feasible algorithms for these expanding procedures.

cs.IT math.CO math.IT
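
For readers skimming past the abstract, the standard definition of self-orthogonality assumed here (not a statement from the paper itself) is that a linear code is contained in its own dual, under the Euclidean or Hermitian inner product:

```latex
% Euclidean and Hermitian inner products on F_q^n and F_{q^2}^n respectively:
\[
\langle x, y \rangle_E = \sum_{i=1}^{n} x_i y_i ,
\qquad
\langle x, y \rangle_H = \sum_{i=1}^{n} x_i y_i^{\,q} .
\]
% A linear code C is Euclidean (resp. Hermitian) self-orthogonal iff it lies in its dual:
\[
C \subseteq C^{\perp_E}
\quad\text{(resp. } C \subseteq C^{\perp_H}\text{)} .
\]
```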
cs · 11-25 00:00

Covert Communication and Key Generation Over Quantum State-Dependent Channels

arXiv:2511.17504v1 Announce Type: new Abstract: We study covert communication and covert secret key generation with positive rates over quantum state-dependent channels. Specifically, we consider fully quantum state-dependent channels when the transmitter shares an entangled state with the channel. We study this problem setting under two security metrics. For the first security metric, the transmitter aims to communicate covertly with the receiver while simultaneously generating a covert secret key, and for the second security metric, the transmitter aims to transmit a secure message covertly and generate a covert secret key with the receiver simultaneously. Our main results include one-shot and asymptotic achievable positive covert-secret key rate pairs for both security metrics. Our results recover as a special case the best-known results for covert communication over state-dependent classical channels. To the best of our knowledge, our results are the first instance of achieving a positive rate for covert secret key generation and the first instance of achieving a positive covert rate over a quantum channel. Additionally, we show that our results are optimal when the channel is classical and the state is available non-causally at both the transmitter and the receiver.

cs.IT math.IT
cs · 11-25 00:00

Causal Intervention Sequence Analysis for Fault Tracking in Radio Access Networks

arXiv:2511.17505v1 Announce Type: new Abstract: To keep modern Radio Access Networks (RAN) running smoothly, operators need to spot the real-world triggers behind Service-Level Agreement (SLA) breaches well before customers feel them. We introduce an AI/ML pipeline that does two things most tools miss: (1) finds the likely root-cause indicators and (2) reveals the exact order in which those events unfold. We start by labeling network data: records linked to past SLA breaches are marked 'abnormal', and everything else 'normal'. Our model then learns the causal chain that turns normal behavior into a fault. In Monte Carlo tests the approach pinpoints the correct trigger sequence with high precision and scales to millions of data points without loss of speed. These results show that high-resolution, causally ordered insights can move fault management from reactive troubleshooting to proactive prevention.

cs.NI cs.LG
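
A minimal sketch of the labeling step the abstract describes, marking records that precede a known SLA breach as abnormal; the window length and column names are assumptions for illustration, not details from the paper:

```python
import pandas as pd

def label_records(df: pd.DataFrame, breach_times, window="30min") -> pd.DataFrame:
    """Mark rows whose timestamp falls within `window` before a known SLA breach."""
    df = df.copy()
    df["label"] = "normal"
    w = pd.Timedelta(window)
    for t in pd.to_datetime(breach_times):
        mask = (df["timestamp"] >= t - w) & (df["timestamp"] <= t)
        df.loc[mask, "label"] = "abnormal"
    return df
```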
cs · 11-25 00:00

AURA: Adaptive Unified Reasoning and Automation with LLM-Guided MARL for NextG Cellular Networks

arXiv:2511.17506v1 Announce Type: new Abstract: Next-generation (NextG) cellular networks are expected to manage dynamic traffic while sustaining high performance. Large language models (LLMs) provide strategic reasoning for 6G planning, but their computational cost and latency limit real-time use. Multi-agent reinforcement learning (MARL) supports localized adaptation, yet coordination at scale remains challenging. We present AURA, a framework that integrates cloud-based LLMs for high-level planning with base stations modeled as MARL agents for local decision-making. The LLM generates objectives and subgoals from its understanding of the environment and reasoning capabilities, while agents at base stations execute these objectives autonomously, guided by a trust mechanism that balances local learning with external input. To reduce latency, AURA employs batched communication so that agents update the LLM's view of the environment and receive improved feedback. In a simulated 6G scenario, AURA improves resilience, reducing dropped handoff requests by more than half under normal and high traffic and lowering system failures. Agents use LLM input in fewer than 60% of cases, showing that guidance augments rather than replaces local adaptability, thereby mitigating latency and hallucination risks. These results highlight the promise of combining LLM reasoning with MARL adaptability for scalable, real-time NextG network management.

cs.NI cs.AI
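
The abstract only names the trust mechanism without detailing it; the toy sketch below shows one plausible reading (follow LLM guidance with a probability given by a running trust score, updated from observed reward) and should not be read as the authors' implementation:

```python
import random

class TrustGate:
    """Toy trust mechanism: follow LLM guidance with probability `trust`."""
    def __init__(self, trust=0.5, lr=0.05):
        self.trust, self.lr = trust, lr
        self.used_llm = False

    def choose(self, local_action, llm_action):
        self.used_llm = random.random() < self.trust
        return llm_action if self.used_llm else local_action

    def update(self, reward_followed, reward_baseline):
        """Raise trust when following the LLM beat the local baseline, lower it otherwise."""
        if self.used_llm:
            delta = self.lr if reward_followed > reward_baseline else -self.lr
            self.trust = min(1.0, max(0.0, self.trust + delta))
```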
cs · 11-25 00:00

The use of artificial intelligence in music creation: between interface and appropriation

arXiv:2511.17507v1 Announce Type: new Abstract: By observing the activities and relationships of musicians and sound designers to the activities of creation, performance, publishing and dissemination with artificial intelligence (AI), from two specialized forums between 2022 and 2024, this article proposes a lexicometric analysis of the representations linked to their use. Indeed, the machine, now equipped with artificial intelligences requiring new appropriations and enabling new mediations, poses new challenges for artists. To study these confrontations and new mediations, our approach mobilizes the theoretical framework of the Human-AI Musicking Framework, based on a lexicometric analysis of content. The aim is to clarify the present and future uses of AI from the interfaces, in the creation of sound and musical content, and to identify the obstacles, brakes, and limits to appropriation, understood as "making the content one's own and integrating it as a part of oneself" (Bachimont and Crozat, 2004), in the context of a collaboration between musician and machine.

cs.HC cs.AI
cs · 11-25 00:00

Deep Learning-based Lightweight RGB Object Tracking for Augmented Reality Devices

arXiv:2511.17508v1 Announce Type: new Abstract: Augmented Reality (AR) applications often require robust real-time tracking of objects in the user's environment to correctly overlay virtual content. Recent advances in computer vision have produced highly accurate deep learning-based object trackers, but these models are typically too heavy in computation and memory for wearable AR devices. In this paper, we present a lightweight RGB object tracking algorithm designed specifically for resource-constrained AR platforms. The proposed tracker employs a compact Siamese neural network architecture and incorporates optimization techniques such as model pruning, quantization, and knowledge distillation to drastically reduce model size and inference cost while maintaining high tracking accuracy. We train the tracker offline on large video datasets using deep convolutional neural networks and then deploy it on-device for real-time tracking. Experimental results on standard tracking benchmarks show that our approach achieves comparable accuracy to state-of-the-art trackers, yet runs in real-time on a mobile AR headset at around 30 FPS -- more than an order of magnitude faster than prior high-performance trackers on the same hardware. This work enables practical, robust object tracking for AR use-cases, opening the door to more interactive and dynamic AR experiences on lightweight devices.

cs.HC cs.CV
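
The abstract does not give the architecture, so the following is a generic compact Siamese tracker sketch in PyTorch (shared lightweight backbone, template used as a cross-correlation kernel), meant only to illustrate the general approach:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinySiamese(nn.Module):
    """Shared lightweight backbone; the template features act as a correlation kernel."""
    def __init__(self, feat=64):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, feat, 3, stride=2, padding=1), nn.ReLU(),
        )

    def forward(self, template, search):
        z = self.backbone(template)              # (B, C, Hz, Wz)
        x = self.backbone(search)                # (B, C, Hx, Wx)
        B, C, Hz, Wz = z.shape
        # Per-sample cross-correlation via grouped convolution.
        x = x.reshape(1, B * C, *x.shape[-2:])
        kernel = z.reshape(B * C, 1, Hz, Wz)
        resp = F.conv2d(x, kernel, groups=B * C)
        return resp.reshape(B, C, *resp.shape[-2:]).sum(dim=1, keepdim=True)

# score_map = TinySiamese()(template_crop, search_crop)  # peak ~ target location
```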
cs · 11-25 00:00

Beyond Awareness: Investigating How AI and Psychological Factors Shape Human Self-Confidence Calibration

arXiv:2511.17509v1 Announce Type: new Abstract: Human-AI collaboration outcomes depend strongly on human self-confidence calibration, which drives reliance or resistance toward AI's suggestions. This work presents two studies examining whether calibration of self-confidence before decision tasks, low versus high levels of Need for Cognition (NFC), and Actively Open-Minded Thinking (AOT), leads to differences in decision accuracy, self-confidence appropriateness during the tasks, and metacognitive perceptions (global and affective). The first study presents strategies to identify well-calibrated users, also comparing decision accuracy and the appropriateness of self-confidence across NFC and AOT levels. The second study investigates the effects of calibrated self-confidence in AI-assisted decision-making (no AI, two-stage AI, and personalized AI), also considering different NFC and AOT levels. Our results show the importance of human self-confidence calibration and psychological traits when designing AI-assisted decision systems. We further propose design recommendations to address the challenge of calibrating self-confidence and supporting tailored, user-centric AI that accounts for individual traits.

cs.HC cs.AI
cs · 11-25 00:00

A Multidisciplinary Design and Optimization (MDO) Agent Driven by Large Language Models

arXiv:2511.17511v1 Announce Type: new Abstract: To accelerate mechanical design and enhance design quality and innovation, we present a Multidisciplinary Design and Optimization (MDO) Agent driven by Large Language Models (LLMs). The agent semi-automates the end-to-end workflow by orchestrating three core capabilities: (i) natural-language-driven parametric modeling, (ii) retrieval-augmented generation (RAG) for knowledge-grounded conceptualization, and (iii) intelligent orchestration of engineering software for performance verification and optimization. Working in tandem, these capabilities interpret high-level, unstructured intent, translate it into structured design representations, automatically construct parametric 3D CAD models, generate reliable concept variants using external knowledge bases, and conduct evaluation with iterative optimization via tool calls such as finite-element analysis (FEA). Validation on three representative cases - a gas-turbine blade, a machine-tool column, and a fractal heat sink - shows that the agent completes the pipeline from natural-language intent to verified and optimized designs with reduced manual scripting and setup effort, while promoting innovative design exploration. This work points to a practical path toward human-AI collaborative mechanical engineering and lays a foundation for more dependable, vertically customized MDO systems.

cs.HC cs.AI
cs · 11-25 00:00

First Contact with Dark Patterns and Deceptive Designs in Chinese and Japanese Free-to-Play Mobile Games

arXiv:2511.17512v1 Announce Type: new Abstract: Mobile games have gained immense popularity due to their accessibility, allowing people to play anywhere, anytime. Dark patterns and deceptive designs (DPs) have been found in these and other gaming platforms within certain cultural contexts. Here, we explored DPs in the onboarding experiences of free-to-play mobile games from China and Japan. We identified several unique patterns and mapped their relative prevalence. We also found that game developers often employ combinations of DPs as a strategy ("DP Combos") and use elements that, while not inherently manipulative, can enhance the impact of known patterns ("DP Enhancers"). Guided by these findings, we then developed an enriched ontology for categorizing deceptive game design patterns into classes and subclasses. This research contributes to understanding deceptive game design patterns and offers insights for future studies on cultural dimensions and ethical game design in general.

cs.HC cs.CY
cs · 11-25 00:00

Motivational Climate Effects on Communications, Emotional-Social States, and Performance in Collaborative Gaming Environment

arXiv:2511.17513v1 Announce Type: new Abstract: The study explores the effects of motivational climate on communication features, emotional states, collective efficacy, and performance in collaborative gaming environments. Forty participants with no prior gaming experience were randomly assigned to 20 gender-matched teams of three (including one confederate) across two motivational climates: positive-supportive (PS) or neutral-unsupported (NU) (10 teams per condition). Team members completed three progressively difficult levels of Overcooked! 2 during which communication contents, emotional responses, collective efficacy, and performance outcomes were observed and coded. Mixed-design MANOVAs and ANOVAs were employed to examine the effects of motivational climate and task difficulty on communication patterns, emotions, collective efficacy, and performance. Chi-square analyses were performed to test communication content differences between conditions. Results revealed that PS team members significantly outperformed NU teams at lower task difficulty level, but this advantage diminished as task complexity increased. Communication analysis revealed that PS team members utilized significantly more action-oriented, factual, and emotional/motivational statements, while NU team members used more statements of uncertainty and non-task-related communication. The percentage of the talk time increased with difficulty across both climate conditions. PS team members maintained more positive emotional profiles throughout, with higher excitement and happiness scores and lower anxiety, dejection, and anger compared to NU team members. Furthermore, PS team members reported consistently higher collective efficacy beliefs across all difficulty levels. These findings reveal that positive motivational climate enhances team communication effectiveness, emotional resilience, and performance outcomes in challenging collaborative environments.

cs.HC cs.SI
cs · 11-25 00:00

XAI-on-RAN: Explainable, AI-native, and GPU-Accelerated RAN Towards 6G

arXiv:2511.17514v1 Announce Type: new Abstract: Artificial intelligence (AI)-native radio access networks (RANs) will serve vertical industries with stringent requirements: smart grids, autonomous vehicles, remote healthcare, industrial automation, etc. To achieve these requirements, modern 5G/6G designs increasingly leverage AI for network optimization, but the opacity of AI decisions poses risks in mission-critical domains. These use cases are often delivered via non-public networks (NPNs) or dedicated network slices, where reliability and safety are vital. In this paper, we motivate the need for transparent and trustworthy AI in high-stakes communications (e.g., healthcare, industrial automation, and robotics) by drawing on the 3rd generation partnership project (3GPP)'s vision for non-public networks. We design a mathematical framework to model the trade-offs between transparency (explanation fidelity and fairness), latency, and graphics processing unit (GPU) utilization in deploying explainable AI (XAI) models. Empirical evaluations demonstrate that our proposed hybrid XAI model, xAI-Native, consistently surpasses conventional baseline models in performance.

cs.NI cs.AI
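
The abstract announces a mathematical trade-off framework without giving its form; a purely illustrative way to write such a trade-off (an assumption, not the paper's model) is a weighted utility over candidate deployments:

```latex
% Illustrative weighted trade-off over candidate XAI deployments m:
\[
\max_{m \in \mathcal{M}} \; U(m)
  \;=\; \alpha\, F(m) \;+\; \beta\, \Phi(m) \;-\; \gamma\, L(m) \;-\; \delta\, G(m),
\qquad \alpha,\beta,\gamma,\delta \ge 0 .
\]
% F: explanation fidelity, Phi: fairness, L: inference latency, G: GPU utilization.
```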
cs · 11-25 00:00

Embedding Generative AI into Systems Analysis and Design Curriculum: Framework, Case Study, and Cross-Campus Empirical Evidence

arXiv:2511.17515v1 Announce Type: new Abstract: Systems analysis students increasingly use Generative AI, yet current pedagogy lacks systematic approaches for teaching responsible AI orchestration that fosters critical thinking whilst meeting educational outcomes. Students risk accepting AI suggestions blindly or uncritically without assessing alignment with user needs or contextual appropriateness. SAGE (Structured AI-Guided Education) addresses this gap by embedding GenAI into curriculum design, training students when to accept, modify, or reject AI contributions. Implementation with 18 student groups across four Australian universities revealed how orchestration skills develop. Most groups (84%) moved beyond passive acceptance, showing selective judgment, yet none proactively identified gaps overlooked by both human and AI analysis, indicating a competency ceiling. Students strong at explaining decisions also performed well at integrating sources, and those with deep domain understanding consistently addressed accessibility considerations. Accessibility awareness proved fragile. When writing requirements, 85% of groups explicitly considered elderly users and cultural needs. Notably, 55% of groups struggled to identify when AI misclassified system boundaries (what belongs inside versus outside the system), 45% missed data management errors (how information is stored and updated), and 55% overlooked missing exception handling. Three implications emerge for educators: (i) require students to document why they accepted, modified, or rejected each AI suggestion, making reasoning explicit; (ii) embed accessibility prompts at each development stage because awareness collapses without continuous scaffolding; and (iii) have students create their own specifications before using AI, then compare versions, and anchor to research or standards to identify gaps.

cs.HC cs.AI
cs · 11-25 00:00

SAJD: Self-Adaptive Jamming Attack Detection in AI/ML Integrated 5G O-RAN Networks

arXiv:2511.17519v1 Announce Type: new Abstract: The open radio access network (O-RAN) enables modular, intelligent, and programmable 5G network architectures through the adoption of software-defined networking (SDN), network function virtualization (NFV), and implementation of standardized open interfaces. It also facilitates closed-loop control and (non/near) real-time optimization of the radio access network (RAN) through the integration of non-real-time applications (rApps) and near-real-time applications (xApps). However, jamming attacks are a security concern that can severely undermine network performance and pose a prominent threat to the security and reliability of O-RAN networks. To address this, we introduce SAJD, a self-adaptive jammer detection framework that autonomously detects jamming attacks in artificial intelligence (AI) / machine learning (ML)-integrated O-RAN environments. The SAJD framework forms a closed-loop system that includes near-real-time inference of radio signal jamming interference via our developed ML-based xApp, as well as continuous monitoring and retraining pipelines through rApps. Specifically, a labeler rApp is developed that uses live telemetry (i.e., KPIs) to detect model drift, triggers unsupervised data labeling, executes model training/retraining using the integrated and open-source ClearML framework, and updates deployed models on the fly, without service disruption. Experiments on an O-RAN-compliant testbed demonstrate that the SAJD framework outperforms a state-of-the-art (offline-trained with manual labels) jamming detection approach in accuracy and adaptability under various dynamic and previously unseen interference scenarios.

cs.NI cs.AI
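
A small sketch of the kind of drift-triggered retraining loop the abstract describes; the scoring window and threshold are assumptions for illustration and not the SAJD implementation:

```python
from collections import deque

class DriftMonitor:
    """Flag drift when the recent detection score drops well below a reference window."""
    def __init__(self, window=200, drop=0.10):
        self.ref = deque(maxlen=window)      # frozen baseline once full
        self.recent = deque(maxlen=window)   # rolling live scores
        self.drop = drop

    def observe(self, score):
        (self.ref if len(self.ref) < self.ref.maxlen else self.recent).append(score)

    def drifted(self):
        if len(self.ref) < self.ref.maxlen or not self.recent:
            return False
        ref = sum(self.ref) / len(self.ref)
        cur = sum(self.recent) / len(self.recent)
        return (ref - cur) > self.drop       # True would trigger relabeling + retraining
```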
cs · 11-25 00:00

Safe Farming: Development of a Prevention System to Mitigate Vertebrates Crop Raiding

arXiv:2511.17520v1 Announce Type: new Abstract: One of the main problems for farmers is the protection of their crops, before and after harvesting, from animals and birds. To overcome this problem, this paper proposes a model of safe farming in which crops are protected from vertebrate attacks through a prevention system based on Wireless Sensor Networks. Sensor nodes placed around the field detect the presence of animals or birds and generate the required signals and information. This information is passed to the Repelling and Notifying System (RNS) installed in the field through a short-range wireless technology, ZigBee. When the RNS receives the information, it generates ultrasonic sounds that are unbearable for animals and birds, causing them to run away from the field. These ultrasonic sounds are generated in a frequency range that only animals and birds can hear, while humans cannot notice the sound. The paper also proposes a notifying system: it informs the farmer about animal or bird intrusions in the field through SMS, but does not require any action from the farmer. The low cost and power efficiency of the proposed system are a key advantage for developing countries, where cost and power are major factors in any system's feasibility.

cs.NI cs.ET cs.AI
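
A toy sketch of the repel-and-notify flow described above; the event format, repeller driver, and SMS gateway are hypothetical stand-ins, since no code accompanies the abstract:

```python
ULTRASONIC_HZ = 25_000   # assumption: a frequency above the range humans can hear

def emit_ultrasonic(freq_hz, duration_s=10):
    print(f"emitting {freq_hz} Hz for {duration_s} s")            # stand-in for the repeller driver

def notify_farmer(event):
    print(f"SMS to farmer: intrusion at node {event['node_id']} ({event['time']})")  # stand-in SMS gateway

def rns_loop(detection_events):
    """Handle detection events arriving from ZigBee sensor nodes (an iterable of dicts)."""
    for event in detection_events:
        emit_ultrasonic(ULTRASONIC_HZ)   # repel the intruding animals or birds
        notify_farmer(event)             # inform the farmer; no action required from them

# rns_loop([{"node_id": 3, "time": "06:12"}])   # example event from one sensor node
```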
cs · 11-25 00:00

DyPBP: Dynamic Peer Beneficialness Prediction for Cryptocurrency P2P Networking

arXiv:2511.17523v1 Announce Type: new Abstract: Distributed peer-to-peer (P2P) networking delivers new blocks and transactions and is critical for cryptocurrency blockchain system operations. Having poor P2P connectivity reduces the financial rewards from the mining consensus protocol. Previous research defines the beneficialness of each Bitcoin peer connection and estimates it from observations of block and transaction delivery, i.e., only after they are delivered. However, due to infrequent block arrivals and sporadic, unstable peer connections, peers do not stay connected long enough for the beneficialness score to converge to its expected value. We design and build Dynamic Peer Beneficialness Prediction (DyPBP), which predicts a peer's beneficialness by using networking behavior observations beyond just the block and transaction arrivals. DyPBP advances previous research by estimating the beneficialness of a peer connection before it delivers new blocks and transactions. To achieve this goal, DyPBP introduces a new remembrance feature to address the dynamic connectivity issue, as Bitcoin peers using distributed networking often disconnect and re-connect. We implement DyPBP on an active Bitcoin node connected to the Mainnet and use machine learning for the beneficialness prediction. Our experimental results validate and evaluate the effectiveness of DyPBP; for example, the error performance improves by 2 to 13 orders of magnitude depending on the machine-learning model selection. DyPBP's use of the remembrance feature also informs our model selection. DyPBP enables estimation of a P2P connection's beneficialness from the connection start, before a new block arrives.

cs.NI cs.LG
cs · 11-25 00:00

RadioMapMotion: A Dataset and Baseline for Proactive Spatio-Temporal Radio Environment Prediction

arXiv:2511.17526v1 Announce Type: new Abstract: Radio maps (RMs), which provide location-based pathloss estimations, are fundamental to enabling proactive, environment-aware communication in 6G networks. However, existing deep learning-based methods for RM construction often model dynamic environments as a series of independent static snapshots, thereby omitting the temporal continuity inherent in signal propagation changes caused by the motion of dynamic entities. To address this limitation, we propose the task of spatio-temporal RM prediction, which involves forecasting a sequence of future maps from historical observations. A key barrier to this predictive approach has been the lack of datasets capturing continuous environmental evolution. To fill this gap, we introduce RadioMapMotion, the first large-scale public dataset of continuous RM sequences generated from physically consistent vehicle trajectories. As a baseline for this task, we propose RadioLSTM, a UNet architecture based on Convolutional Long Short-Term Memory (ConvLSTM) and designed for multi-step sequence forecasting. Experimental evaluations show that RadioLSTM achieves higher prediction accuracy and structural fidelity compared to representative baseline methods. Furthermore, the model exhibits a low inference latency, indicating its potential suitability for real-time network operations. Our project will be publicly released at: https://github.com/UNIC-Lab/RadioMapMotion upon paper acceptance.

cs.NI cs.AI
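
The abstract names a ConvLSTM-based UNet (RadioLSTM) without architectural details; below is a standard ConvLSTM cell in PyTorch, the building block such a model would typically stack (a generic sketch, not the released code):

```python
import torch
import torch.nn as nn

class ConvLSTMCell(nn.Module):
    """Standard ConvLSTM cell: all four gates computed by one convolution over [x, h]."""
    def __init__(self, in_ch, hid_ch, k=3):
        super().__init__()
        self.hid_ch = hid_ch
        self.gates = nn.Conv2d(in_ch + hid_ch, 4 * hid_ch, k, padding=k // 2)

    def forward(self, x, state=None):
        if state is None:
            B, _, H, W = x.shape
            state = (x.new_zeros(B, self.hid_ch, H, W),
                     x.new_zeros(B, self.hid_ch, H, W))
        h, c = state
        i, f, o, g = torch.chunk(self.gates(torch.cat([x, h], dim=1)), 4, dim=1)
        i, f, o, g = torch.sigmoid(i), torch.sigmoid(f), torch.sigmoid(o), torch.tanh(g)
        c = f * c + i * g
        h = o * torch.tanh(c)
        return h, (h, c)

# Rolling a sequence of past radio maps through such cells yields hidden states
# from which future pathloss maps can be decoded frame by frame.
```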
cs · 11-25 00:00

Evaluating Device-First Continuum AI (DFC-AI) for Autonomous Operations in the Energy Sector

arXiv:2511.17528v1 Announce Type: new Abstract: Industrial automation in the energy sector requires AI systems that can operate autonomously regardless of network availability, a requirement that cloud-centric architectures cannot meet. This paper evaluates the application of Device-First Continuum AI (DFC-AI) to critical energy sector operations. DFC-AI, a specialized architecture within the Hybrid Edge Cloud paradigm, implements intelligent agents using a microservices architecture that originates at end devices and extends across the computational continuum. Through comprehensive simulations of energy sector scenarios including drone inspections, sensor networks, and worker safety systems, we demonstrate that DFC-AI maintains full operational capability during network outages while cloud and gateway-based systems experience complete or partial failure. Our analysis reveals that zero-configuration GPU discovery and heterogeneous device clustering are particularly well-suited for energy sector deployments, where specialized nodes can handle intensive AI workloads for entire fleets of inspection drones or sensor networks. The evaluation shows that DFC-AI achieves significant latency reduction and energy savings compared to cloud architectures. Additionally, we find that gateway based edge solutions can paradoxically cost more than cloud solutions for certain energy sector workloads due to infrastructure overhead, while DFC-AI can consistently provide cost savings by leveraging enterprise-owned devices. These findings, validated through rigorous statistical analysis, establish that DFC-AI addresses the unique challenges of energy sector operations, ensuring intelligent agents remain available and functional in remote oil fields, offshore platforms, and other challenging environments characteristic of the industry.

cs.NI cs.AI
cs · 11-25 00:00

A Dynamic Take on Window Management

arXiv:2511.17516v1 Announce Type: new Abstract: On modern computers with graphical user interfaces, application windows are managed by a window manager, a core component of the desktop environment. Mainstream operating systems such as Microsoft Windows and Apple's macOS employ window managers, where users rely on a mouse or trackpad to manually resize, reposition, and switch between overlapping windows. This approach can become inefficient, particularly on smaller screens such as laptops, where frequent window adjustments disrupt workflow and increase task completion time. An alternative paradigm, dynamic window management, automatically arranges application windows into non-overlapping layouts. These systems reduce the need for manual manipulation by providing intelligent placement strategies and support for multiple workspaces. Despite their potential usability benefits, dynamic window managers remain niche, primarily available on Linux systems and rarely enabled by default. This study evaluates the usability of dynamic window managers in comparison to conventional floating window systems. We developed a prototype dynamic window manager that incorporates configurable layouts and workspace management, and we conducted both heuristic evaluation and statistical testing to assess its effectiveness. Our findings indicate that dynamic window managers significantly improve task completion time in multi-window workflows by 37.83%. By combining cognitive heuristics with empirical performance measures, this work highlights the potential of dynamic window management as a viable alternative to traditional floating window systems and contributes evidence-based insights to the broader field of human-computer interaction (HCI).

cs.HC
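
To make the contrast with floating window management concrete, here is a minimal master-stack tiling computation of the kind dynamic window managers perform; it is an illustrative sketch, not the prototype evaluated in the study:

```python
def master_stack(n_windows, screen_w, screen_h, master_ratio=0.6):
    """Return (x, y, w, h) for each window: one master pane, the rest stacked beside it."""
    if n_windows == 0:
        return []
    if n_windows == 1:
        return [(0, 0, screen_w, screen_h)]
    master_w = int(screen_w * master_ratio)
    stack_h = screen_h // (n_windows - 1)
    layout = [(0, 0, master_w, screen_h)]                    # master pane on the left
    for i in range(n_windows - 1):                           # remaining windows stacked
        layout.append((master_w, i * stack_h, screen_w - master_w, stack_h))
    return layout

print(master_stack(3, 1920, 1080))
# [(0, 0, 1152, 1080), (1152, 0, 768, 540), (1152, 540, 768, 540)]
```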
cs · 11-25 00:00

RI-PIENO -- Revised and Improved Petrol-Filling Itinerary Estimation aNd Optimization

arXiv:2511.17517v1 Announce Type: new Abstract: Efficient energy provisioning is a fundamental requirement for modern transportation systems, making refueling path optimization a critical challenge. Existing solutions often focus either on inter-vehicle communication or intra-vehicle monitoring, leveraging Intelligent Transportation Systems, Digital Twins, and Software-Defined Internet of Vehicles with Cloud/Fog/Edge infrastructures. However, integrated frameworks that adapt dynamically to driver mobility patterns are still underdeveloped. Building on our previous PIENO framework, we present RI-PIENO (Revised and Improved Petrol-filling Itinerary Estimation aNd Optimization), a system that combines intra-vehicle sensor data with external geospatial and fuel price information, processed via IoT-enabled Cloud/Fog services. RI-PIENO models refueling as a dynamic, time-evolving directed acyclic graph that reflects both habitual daily trips and real-time vehicular inputs, transforming the system from a static recommendation tool into a continuously adaptive decision engine. We validate RI-PIENO in a daily-commute use case through realistic multi-driver, multi-week simulations, showing that it achieves significant cost savings and more efficient routing compared to previous approaches. The framework is designed to leverage emerging roadside infrastructure and V2X communication, supporting scalable deployment within next-generation IoT and vehicular networking ecosystems.

cs.NI
cs · 11-25 00:00

Serv-Drishti: An Interactive Serverless Function Request Simulation Engine and Visualiser

arXiv:2511.17518v1 Announce Type: new Abstract: The rapid adoption of serverless computing necessitates a deeper understanding of its underlying operational mechanics, particularly concerning request routing, cold starts, function scaling, and resource management. This paper presents Serv-Drishti, an interactive, open-source simulation tool designed to demystify these complex behaviours. Serv-Drishti simulates and visualises the journey of a request through a representative serverless platform, from the API Gateway and intelligent Request Dispatcher to dynamic Function Instances on resource-constrained Compute Nodes. Unlike simple simulators, Serv-Drishti provides a robust framework for comparative analysis. It features configurable platform parameters, multiple request routing and function placement strategies, and a comprehensive failure simulation module. This allows users to not only observe but also rigorously analyse system responses under various loads and fault conditions. The tool generates real-time performance graphs and provides detailed data exports, establishing it as a valuable resource for research, education, and the design analysis of serverless architectures.

cs.NI
cs · 11-25 00:00

Joint Edge Server Deployment and Computation Offloading: A Multi-Timescale Stochastic Programming Framework

arXiv:2511.17524v1 Announce Type: new Abstract: Mobile Edge Computing (MEC) is a promising approach for enhancing the quality-of-service (QoS) of AI-enabled applications in the B5G/6G era, by bringing computation capability closer to end-users at the network edge. In this work, we investigate the joint optimization of edge server (ES) deployment, service placement, and computation task offloading under the stochastic information scenario. Traditional approaches often treat these decisions as equal, disregarding the differences in information realization. However, in practice, the ES deployment decision must be made in advance and remain unchanged, prior to the complete realization of information, whereas the decisions regarding service placement and computation task offloading can be made and adjusted in real-time after information is fully realized. To address such temporal coupling between decisions and information realization, we introduce the stochastic programming (SP) framework, which involves a strategic-layer for deciding ES deployment based on (incomplete) stochastic information and a tactical-layer for deciding service placement and task offloading based on complete information realization. The problem is challenging due to the different timescales of two layers' decisions. To overcome this challenge, we propose a multi-timescale SP framework, which includes a large timescale (called period) for strategic-layer decision-making and a small timescale (called slot) for tactical-layer decision making. Moreover, we design a Lyapunov-based algorithm to solve the tactical-layer problem at each time slot, and a Markov approximation algorithm to solve the strategic-layer problem in every time period.

cs.NI
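
The deployment-then-adaptation structure described above is, in generic form, the classical two-stage stochastic program that the multi-timescale framework builds on; the template below is that generic form, not the paper's exact formulation:

```latex
% Generic two-stage stochastic program (first stage fixed before xi is observed):
\[
\min_{x \in \mathcal{X}} \; c^{\top} x \;+\; \mathbb{E}_{\xi}\big[\, Q(x,\xi) \,\big],
\qquad
Q(x,\xi) \;=\; \min_{y \in \mathcal{Y}(x,\,\xi)} \; q(\xi)^{\top} y .
\]
% x: strategic-layer decision (ES deployment), chosen once per period;
% y: tactical-layer decisions (service placement, task offloading), chosen per slot
%    after the realization xi (e.g., channel states, task arrivals) is known.
```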
cs · 11-25 00:00

Quantifying Multimedia Streaming Quality: A Practical Analysis using PIE and Flow Queue PIE

arXiv:2511.17525v1 Announce Type: new Abstract: The exponential growth of multimedia streaming services over the Internet emphasizes the increasing significance of ensuring a seamless and high-quality streaming experience for users. Dynamic Adaptive Streaming over HTTP (DASH) has emerged as a popular solution for delivering multimedia content over variable network conditions. However, challenges such as network congestion, intermittent packet losses, and varying network load continue to impact the Quality of Experience (QoE) perceived by the users. In this work, the main goal is to evaluate the effectiveness of using queue management and flow isolation techniques in terms of improving the overall QoE for DASH based multimedia streaming applications. Proportional Integral controller Enhanced (PIE) and Flow Queue PIE (FQ-PIE) are used as queue management and flow isolation mechanisms, respectively. The most distinctive aspect of this work is our assessment of QoE for multimedia streaming applications when multipath transport protocols, like Multipath TCP (MPTCP), are employed. Network Stack Tester (NeST), a Python based network emulator built on top of Linux network namespaces, has been used to perform the experiments. The parameters used for evaluating the QoE include bitrate, bitrate switches, throughput, Round Trip Time (RTT), and application buffer level. We observe that flow isolation techniques, combined with queue management and multipath transport, significantly improve the QoE for multimedia applications.

cs.NI
cs · 11-25 00:00

Bunny Hops and Blockchain Stops: Cross-Chain MEV Detection With N-Hops

arXiv:2511.17527v1 Announce Type: new Abstract: This student paper introduces a novel methodology for the detection and analysis of multihop cross-chain arbitrage opportunities, wherein multihop denotes arbitrage sequences involving more than two transactional steps across distinct blockchain networks, executed using sequence-dependent strategies. Utilizing a comprehensive dataset comprising over 2.4 billion transactions recorded between September 2023 and August 2024 (encompassing 12 blockchain platforms and 45 cross-chain bridges) we design and implement an algorithm capable of identifying, sequence-dependent arbitrage paths spanning multiple ecosystems. Our empirical analysis demonstrates that such arbitrage opportunities are exceedingly infrequent, underscoring the inherent challenges associated with multihop execution in cross-chain environments.

cs.OH
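
A small sketch of how sequence-dependent paths with more than two hops can be enumerated over a cross-chain transfer graph; how the graph is built from bridge and transaction data is abstracted away, and the chain names in the example are hypothetical:

```python
from collections import defaultdict

def multihop_paths(edges, start, min_hops=3, max_hops=5):
    """Enumerate simple paths from `start` with between min_hops and max_hops edges."""
    graph = defaultdict(list)
    for src, dst in edges:
        graph[src].append(dst)

    found = []
    def dfs(node, path):
        hops = len(path) - 1
        if hops >= min_hops:
            found.append(list(path))
        if hops == max_hops:
            return
        for nxt in graph[node]:
            if nxt not in path:          # keep paths simple (no revisits)
                dfs(nxt, path + [nxt])
    dfs(start, [start])
    return found

# Example with made-up chains: only the 3-hop path qualifies.
print(multihop_paths([("eth", "bsc"), ("bsc", "arb"), ("arb", "poly"), ("eth", "arb")], "eth"))
# [['eth', 'bsc', 'arb', 'poly']]
```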
cs · 11-25 00:00

Time-Series Foundation Models for ISP Traffic Forecasting

arXiv:2511.17529v1 Announce Type: new Abstract: Accurate network-traffic forecasting enables proactive capacity planning and anomaly detection in Internet Service Provider (ISP) networks. Recent advances in time-series foundation models (TSFMs) have demonstrated strong zero-shot and few-shot generalization across diverse domains, yet their effectiveness for computer networking remains unexplored. This paper presents a systematic evaluation of a TSFM, IBM's Tiny Time Mixer (TTM), on the CESNET-TimeSeries24 dataset, a 40-week real-world ISP telemetry corpus. We assess TTM under zero-shot and few-shot settings across multiple forecasting horizons (hours to days), aggregation hierarchies (institutions, subnets, IPs), and temporal resolutions (10-minute and hourly). Results show that TTM achieves consistent accuracy (RMSE 0.026-0.057) and stable $R^2$ scores across horizons and context lengths, outperforming or matching fully trained deep learning baselines such as GRU and LSTM. Inference latency remains under 0.05s per 100 points on a single MacBook Pro using CPU-only computation, confirming deployability without dedicated GPU or MPS acceleration. These findings highlight the potential of pretrained TSFMs to enable scalable, efficient, and training-free forecasting for modern network monitoring and management systems.

cs.NI
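
For readers who want to reproduce the headline metrics on their own telemetry, a generic rolling zero-shot evaluation loop is sketched below; the `forecast` callable is a stand-in for whichever pretrained model is used and is not the TTM API:

```python
import numpy as np

def rolling_eval(series, forecast, context=512, horizon=96, step=96):
    """Slide over `series`, forecasting `horizon` points from each `context` window."""
    ys, yhats = [], []
    for start in range(0, len(series) - context - horizon + 1, step):
        ctx = series[start:start + context]
        y = series[start + context:start + context + horizon]
        yhats.append(np.asarray(forecast(ctx, horizon)))   # stand-in forecaster
        ys.append(y)
    y, yhat = np.concatenate(ys), np.concatenate(yhats)
    rmse = float(np.sqrt(np.mean((y - yhat) ** 2)))
    r2 = float(1.0 - np.sum((y - yhat) ** 2) / np.sum((y - y.mean()) ** 2))
    return rmse, r2
```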