National Preprint Platform
First published in China, known worldwide
To address the limitations of traditional emergency decision-making systems, namely single data sources, low levels of intelligence, and rigid decision paths, we propose an intelligent emergency decision support system based on multi-agent collaboration. The system adopts our proposed Plan-Execute-Monitor (PEM) loop architecture, integrates multi-source information fusion modules including web search, knowledge graph queries, and geographic information services, and builds a multi-agent collaboration mechanism driven by tree-of-thought reasoning. By introducing key techniques such as plan-oriented path generation, multi-dimensional progress evaluation, and adaptive execution monitoring, the system resolves common failure modes of traditional multi-agent systems, including instruction-following failures, repeated steps, and context loss. Following the GAIA evaluation framework, we autonomously constructed a GAIA-style emergency management dataset of 135 tasks from open government documents and web crawling. Experimental results on this dataset show that the PEM architecture achieves an accuracy of 48.7%, which is 20.1 percentage points higher than the 28.6% of the traditional iterative search architecture, while reducing average execution time by 63.7%, validating the effectiveness of the system.
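The Plan-Execute-Monitor loop described above can be sketched as follows. This is a minimal illustration of the control flow only; all function names, the round limit, and the dictionary fields are assumptions for this sketch, not the paper's actual API.

```python
# Illustrative sketch of a Plan-Execute-Monitor (PEM) loop: a planner
# generates a step path, an executor runs each step against external tools
# (e.g. web search, knowledge graph, GIS), and a monitor evaluates progress
# and adapts the plan. Names and structure are assumptions, not the paper's.

def pem_loop(task, planner, executor, monitor, max_rounds=5):
    """Run plan -> execute -> monitor rounds until the monitor accepts."""
    plan = planner(task)                      # plan-oriented path generation
    context = []                              # shared context, guards against context loss
    for _ in range(max_rounds):
        for step in plan:
            if any(r["step"] == step for r in context):
                continue                      # avoid repeating completed steps
            result = executor(step, context)  # tool call for this step
            context.append({"step": step, "result": result})
        verdict = monitor(task, plan, context)  # multi-dimensional progress check
        if verdict["done"]:
            return verdict["answer"]
        plan = verdict["revised_plan"]        # adaptive execution monitoring: replan
    return None
```

The point of the sketch is the division of labor: the executor never decides termination, and the monitor can revise the plan between rounds rather than locking in a fixed decision path.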
This study used EEG microstate analysis to explore the fine-grained dynamic neural processing underlying insight problem solving in a compound remote associates task. The main results show that in the early phase of problem presentation, both insight and non-insight solutions exhibited a higher frequency of microstate B (associated with visual processing) and more transitions between microstates B and D (associated with the executive function network) than the unsolved condition. Compared with non-insight solutions, insight solutions showed a higher frequency of microstate C (associated with the default mode network) in the middle and late phases, together with high mutual transition probabilities among microstates A (associated with sensory and auditory processing), C, and D. This study provides a preliminary characterization of the fine-grained dynamic neural processing of insight problem solving, offers electrophysiological evidence for the complex interplay of multiple cognitive processes under executive-function regulation during insight, and sheds some light on how unconscious processing may change over the course of insight problem solving.
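The transition probabilities reported above are conventionally computed from the sequence of microstate labels by counting switches between distinct states and row-normalizing. A minimal sketch, using synthetic labels rather than the study's data:

```python
# Sketch of computing microstate transition probabilities from a labeled
# EEG time series (states A-D). Runs of the same label are collapsed so
# that only genuine switches between microstates are counted. The input
# sequence here is synthetic; this is not the study's analysis pipeline.
from collections import Counter

def transition_probabilities(labels, states=("A", "B", "C", "D")):
    """Row-normalized matrix of transitions between distinct microstates."""
    # Collapse consecutive repeats: AABBC -> ABC.
    switches = [l for i, l in enumerate(labels) if i == 0 or l != labels[i - 1]]
    counts = Counter(zip(switches, switches[1:]))
    probs = {}
    for src in states:
        total = sum(counts[(src, dst)] for dst in states)
        probs[src] = {dst: (counts[(src, dst)] / total if total else 0.0)
                      for dst in states}
    return probs
```

Comparing such matrices between conditions (e.g. insight vs. non-insight) is what supports claims like the elevated A/C/D mutual transitions described above.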
Human gastric adenocarcinoma is the main pathological type of gastric cancer, and chemoresistance is a key factor behind treatment failure and poor patient prognosis. ATP1B1 (the β1 subunit of Na+/K+-ATPase) is not only a core protein for maintaining membrane potential and ion homeostasis; in recent years it has also been found to play a multidimensional regulatory role in tumor drug resistance. This article systematically reviews the molecular characteristics of ATP1B1 and its aberrant expression in human gastric adenocarcinoma, focusing on recent progress on the mechanisms by which it promotes chemoresistance, including modulation of drug efflux pump function, regulation of apoptosis, mediation of epithelial-mesenchymal transition (EMT), and integration of non-coding RNA networks. On this basis, the article further dissects the current technical challenges and theoretical controversies in the field, and discusses the prospects for clinically translating ATP1B1 into a predictive biomarker of drug resistance and a novel therapeutic target, aiming to provide new perspectives and strategies for overcoming drug resistance in human gastric adenocarcinoma.
We construct two quantum error correction codes for pure SU(2) lattice gauge theory in the electric basis truncated at the electric flux $j_{\rm max}=1/2$, which are applicable on quasi-1D plaquette chains, the 2D honeycomb lattice, and the 3D triamond and hyperhoneycomb lattices. The first code converts Gauss's law at each vertex into a stabilizer, while the second uses only half of the vertices and is locally the carbon code. Both codes can correct single-qubit errors. The electric and magnetic terms in the SU(2) Hamiltonian are expressed in terms of logical gates in both codes. The logical-gate Hamiltonian in the first code exactly matches the spin Hamiltonian for gauge singlet states found in previous work.
Today's denoising diffusion models do not "denoise" in the classical sense, i.e., they do not directly predict clean images. Rather, the neural networks predict noise or a noised quantity. In this paper, we argue that predicting clean data and predicting noised quantities are fundamentally different. According to the manifold assumption, natural data should lie on a low-dimensional manifold, whereas noised quantities do not. Under this assumption, we advocate models that directly predict clean data, which allows apparently under-capacity networks to operate effectively in very high-dimensional spaces. We show that simple, large-patch Transformers on pixels can be strong generative models, using no tokenizer, no pre-training, and no extra loss. Our approach is conceptually nothing more than "$\textbf{Just image Transformers}$", or $\textbf{JiT}$, as we call it. We report competitive results using JiT with large patch sizes of 16 and 32 on ImageNet at resolutions of 256 and 512, settings where predicting high-dimensional noised quantities can fail catastrophically. With networks that map noisy inputs back onto the data manifold, our research goes back to basics and pursues a self-contained paradigm for Transformer-based diffusion on raw natural data.
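The distinction between the two prediction targets can be made concrete with the standard Gaussian noising relation $x_t = \sqrt{\bar\alpha}\,x_0 + \sqrt{1-\bar\alpha}\,\epsilon$: given a clean-data prediction $\hat{x}_0$, the implied noise prediction follows deterministically, and the two regression losses differ only by a timestep-dependent weight. The toy data and the stand-in "network output" below are assumptions for illustration; this is not JiT's training code.

```python
# Toy numpy sketch contrasting clean-data ("x-prediction") and noise
# ("epsilon-prediction") targets in diffusion training. For the linear
# noising relation x_t = sqrt(ab)*x0 + sqrt(1-ab)*eps, the two MSE losses
# are related by the factor ab / (1 - ab). Data and "network" are toys.
import numpy as np

rng = np.random.default_rng(0)
x0 = rng.normal(size=(64, 8))            # toy batch of "clean" data
eps = rng.normal(size=x0.shape)          # Gaussian noise
alpha_bar = 0.5                          # noise schedule value at some timestep
xt = np.sqrt(alpha_bar) * x0 + np.sqrt(1 - alpha_bar) * eps  # noised input

def mse(pred, target):
    return float(np.mean((pred - target) ** 2))

# Stand-in for a network's clean-data prediction, and the noise prediction
# it implies when we invert the noising relation at x_t.
x0_hat = x0 + 0.1 * rng.normal(size=x0.shape)
eps_hat = (xt - np.sqrt(alpha_bar) * x0_hat) / np.sqrt(1 - alpha_bar)

loss_x = mse(x0_hat, x0)       # clean-data prediction loss
loss_eps = mse(eps_hat, eps)   # implied noise-prediction loss
# Identity: loss_eps == (alpha_bar / (1 - alpha_bar)) * loss_x.
```

The identity shows the targets are interchangeable as parameterizations; the paper's argument is that they are not interchangeable for the network, because only $x_0$ lies on the low-dimensional data manifold.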














