National Preprint Platform
First published in China, known worldwide
With the rapid development of artificial intelligence and the emergence of Agents, MCP (Model Context Protocol), and the SKILL architecture, the paradigm of human-computer interaction has undergone a major transformation. This paper proposes, implements, and open-sources an agent-based, fully automated computation method for tunnel engineering. Whereas the traditional workflow requires substantial time spent learning engineering software, manually extracting parameters from reports, and performing tedious modeling and computation, the proposed method completes complex calculations with a single natural-language instruction issued to the agent.
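The workflow described above can be sketched in miniature: the agent parses numeric parameters out of a natural-language instruction and dispatches them to a computation routine. This is a hypothetical illustration; the function names, regular expressions, and the overburden-pressure formula (sigma_v = gamma * depth) are assumptions for demonstration, not the paper's actual open-source implementation.

```python
import re

def extract_parameters(instruction: str) -> dict:
    """Pull numeric parameters (e.g. tunnel diameter, depth) from a
    natural-language instruction, replacing manual report reading."""
    params = {}
    m = re.search(r"diameter\s*=?\s*([\d.]+)", instruction)
    if m:
        params["diameter_m"] = float(m.group(1))
    m = re.search(r"depth\s*=?\s*([\d.]+)", instruction)
    if m:
        params["depth_m"] = float(m.group(1))
    return params

def run_tunnel_calculation(params: dict) -> dict:
    """Placeholder for the modeling/computation step the agent automates.
    Here: vertical overburden pressure sigma_v = gamma * depth,
    with a nominal unit weight gamma = 20 kN/m^3."""
    gamma = 20.0  # kN/m^3, illustrative value
    return {"overburden_kPa": gamma * params["depth_m"]}

instruction = "Analyze a tunnel with diameter 6.2 and depth 30"
result = run_tunnel_calculation(extract_parameters(instruction))
print(result)  # {'overburden_kPa': 600.0}
```

In a real agent stack, `run_tunnel_calculation` would be exposed as an MCP tool and the parameter extraction delegated to the language model rather than regular expressions.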
Objective: To explore the application and nursing experience of Guo's heart-nourishing and intelligence-benefiting exercises combined with auricular gua sha and narrative nursing in elderly patients with anxiety and depression. Methods: Systematic nursing care was provided to one elderly patient with anxiety and depression, covering nursing assessment, nursing diagnosis, care planning, traditional Chinese medicine (TCM) nursing, routine nursing, narrative nursing, and nursing evaluation. The interventions centered on Guo's heart-nourishing and intelligence-benefiting exercises and auricular gua sha, supplemented by comprehensive care covering diet, exercise, emotional regulation, sleep, and safety, with the narrative-nursing techniques of externalization, deconstruction, re-authoring, and witnessing integrated throughout. Results: After 4 weeks of intervention, the patient's anxiety and depression improved markedly, sleep quality increased, negative cognitions were restructured, quality of life improved, and no adverse events occurred. Conclusion: Guo's heart-nourishing and intelligence-benefiting exercises combined with auricular gua sha and narrative nursing can effectively improve the mood, sleep, and psychological state of elderly patients with anxiety and depression, embodying a patient-centered, mind-body approach to care, and merit promotion in geriatric departments.
High-quality 3D streaming from multiple cameras is crucial for immersive experiences in many AR/VR applications. The limited number of views, often due to real-time constraints, leads to missing information and incomplete surfaces in the rendered images. Existing approaches typically rely on simple heuristics for hole filling, which can result in inconsistencies or visual artifacts. We propose to complete the missing textures using a novel, application-targeted inpainting method, independent of the underlying representation, applied as an image-based post-processing step after novel view rendering. The method is designed as a standalone module compatible with any calibrated multi-camera system. To this end, we introduce a multi-view-aware, transformer-based network architecture using spatio-temporal embeddings to ensure consistency across frames while preserving fine details. Additionally, our resolution-independent design allows adaptation to different camera setups, while an adaptive patch selection strategy balances inference speed and quality, enabling real-time performance. We evaluate our approach against state-of-the-art inpainting techniques under the same real-time constraints and demonstrate that our model achieves the best trade-off between quality and speed, outperforming competitors in both image- and video-based metrics.
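The adaptive patch selection idea mentioned above can be illustrated with a simple scheme: rank image patches by their fraction of missing pixels and spend the expensive inpainting network only on the worst offenders, within a fixed compute budget. The patch size, budget, and scoring rule here are assumptions for illustration, not the paper's actual strategy.

```python
import numpy as np

def select_patches(hole_mask, patch=64, budget=8):
    """Rank non-overlapping patches by missing-pixel fraction and keep
    the `budget` worst ones for the (expensive) inpainting network;
    patches with no holes are skipped entirely."""
    H, W = hole_mask.shape
    scores = []
    for y in range(0, H - patch + 1, patch):
        for x in range(0, W - patch + 1, patch):
            frac = float(hole_mask[y:y + patch, x:x + patch].mean())
            if frac > 0.0:
                scores.append((frac, (y, x)))
    scores.sort(reverse=True)          # most-damaged patches first
    return [pos for _, pos in scores[:budget]]

mask = np.zeros((256, 256), dtype=np.float32)
mask[10:50, 10:50] = 1.0               # a single hole region
print(select_patches(mask))            # [(0, 0)] -- only the patch covering the hole
```

Trading the budget against quality is what allows such a module to hold a real-time frame budget: fewer patches means faster inference at the cost of leaving some holes to a cheap fallback filler.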
We introduce FaceCam, a system that generates video along customizable camera trajectories from monocular human portrait video input. Recent camera-control approaches based on large video-generation models have shown promising progress but often exhibit geometric distortions and visual artifacts on portrait videos due to scale-ambiguous camera representations or 3D reconstruction errors. To overcome these limitations, we propose a face-tailored, scale-aware representation for camera transformations that provides deterministic conditioning without relying on 3D priors. We train a video generation model on both multi-view studio captures and in-the-wild monocular videos, and introduce two camera-control data generation strategies, synthetic camera motion and multi-shot stitching, to exploit stationary training cameras while generalizing to dynamic, continuous camera trajectories at inference time. Experiments on the Ava-256 dataset and diverse in-the-wild videos demonstrate that FaceCam achieves superior performance in camera controllability, visual quality, and identity and motion preservation.
We find the dispersion relations of two elusive families of core-bound excitations of the Gross-Pitaevskii (GP) vortex: varicose (axisymmetric) and fluting (quadrupole) waves. For wavelengths on the order of the healing length, these two families, along with the well-known Kelvin wave, possess an infinite sequence of core-bound, vortex-specific branches whose energies lie below the Bogoliubov dispersion relation. In the short-wavelength limit, these excitations can be interpreted as particles radially bound to the vortex, which acts as a waveguide. In the long-wavelength limit, the fluting waves unbind from the core, the varicose waves reduce to phonons propagating along the vortex, and the fundamental Kelvin wave is the only remaining core-bound, vortex-specific excitation. Finally, we propose a realistic spectroscopic protocol for creating and detecting the varicose wave, which we test by direct numerical simulations of the GP equation.
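For reference, the Bogoliubov dispersion relation that bounds the core-bound branches from above takes a simple closed form in healing-length units. The sketch below evaluates it in units where hbar = m = gn = 1 (so the healing length and sound speed are both 1); this is the standard textbook expression, not a result specific to the paper.

```python
import numpy as np

def bogoliubov(k):
    """Bogoliubov dispersion omega(k) = sqrt((k^2/2) * (k^2/2 + 2))
    in units hbar = m = gn = 1. The core-bound vortex branches
    discussed above lie strictly below this curve."""
    return np.sqrt((k ** 2 / 2) * (k ** 2 / 2 + 2))

k = np.array([0.01, 1.0, 10.0])
print(bogoliubov(k))
# Long-wavelength limit: omega ~ c*k with c = 1 (phonon branch);
# short-wavelength limit: omega ~ k^2/2 + 1 (free-particle-like).
```

The two limits make the waveguide picture concrete: modes with energy below this curve at a given k cannot radiate into bulk phonons and therefore remain trapped in the vortex core.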