
OnPrem.LLM: A Privacy-Conscious Document Intelligence Toolkit


Source: arXiv
Abstract

We present OnPrem.LLM, a Python-based toolkit for applying large language models (LLMs) to sensitive, non-public data in offline or restricted environments. The system is designed for privacy-preserving use cases and provides prebuilt pipelines for document processing and storage, retrieval-augmented generation (RAG), information extraction, summarization, classification, and prompt/output processing with minimal configuration. OnPrem.LLM supports multiple LLM backends -- including llama.cpp, Ollama, vLLM, and Hugging Face Transformers -- with quantized model support, GPU acceleration, and seamless backend switching. Although designed for fully local execution, OnPrem.LLM also supports integration with a wide range of cloud LLM providers when permitted, enabling hybrid deployments that balance performance with data control. A no-code web interface extends accessibility to non-technical users.
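The retrieval-augmented generation (RAG) workflow mentioned in the abstract can be illustrated with a minimal, self-contained sketch: retrieve the document chunks most relevant to a query, then assemble a grounded prompt that a local backend (llama.cpp, Ollama, etc.) would consume. This is not OnPrem.LLM's actual API; the keyword-overlap scorer below is a toy stand-in for the toolkit's real vector-store retrieval.

```python
def retrieve(query: str, chunks: list[str], top_k: int = 2) -> list[str]:
    """Rank chunks by naive word overlap with the query (toy retriever)."""
    q_words = set(query.lower().split())
    scored = sorted(
        chunks,
        key=lambda c: len(q_words & set(c.lower().split())),
        reverse=True,
    )
    return scored[:top_k]

def build_prompt(query: str, context: list[str]) -> str:
    """Assemble a grounded prompt from the retrieved context."""
    joined = "\n".join(f"- {c}" for c in context)
    return (
        "Answer using only the context below.\n"
        f"Context:\n{joined}\n"
        f"Question: {query}"
    )

chunks = [
    "OnPrem.LLM runs large language models fully offline.",
    "The toolkit ships a no-code web interface.",
    "Quantized models reduce memory usage on local hardware.",
]
query = "Can models run on local hardware?"
ctx = retrieve(query, chunks)
prompt = build_prompt(query, ctx)
print(prompt)
```

In a production pipeline, the keyword scorer would be replaced by embedding similarity over a vector store, and the prompt would be sent to whichever LLM backend is configured.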

Arun S. Maiya

Subject: Computing Technology; Computer Technology

Arun S. Maiya. OnPrem.LLM: A Privacy-Conscious Document Intelligence Toolkit [EB/OL]. (2025-05-12) [2025-07-01]. https://arxiv.org/abs/2505.07672.
