Glossary

PostgreSQL is fundamentally a single-node database, relying on ecosystem extensions and derivative architectures for distributed capabilities.
The rapid evolution of Artificial Intelligence (AI), particularly Large Language Models (LLMs), has brought forth impressive generative and reasoning capabilities.
Agentic Search marks a fundamental leap in search technology. This paper provides an in-depth analysis of how AI Agents autonomously plan, iteratively reason, and synthesize complex information to deliver structured, actionable final solutions, thereby redefining the standard for information discovery and problem-solving.
In the age of Artificial Intelligence and Big Data, we are constantly challenged by the need to process and retrieve massive amounts of complex data.
OpenSearch is a community-driven, fully open-source search and analytics suite, forked from Elasticsearch 7.10.2 and Kibana 7.10.2.
With the rapid advancement of Artificial Intelligence and Machine Learning, especially the rise of Large Language Models (LLMs), vector retrieval has become an indispensable part of modern data infrastructure.
In today's data-driven world, effectively storing, retrieving, and analyzing unstructured data such as text, images, and videos is a core challenge. Traditional database systems often struggle to handle this type of data efficiently.
In the rapidly evolving landscape of artificial intelligence and machine learning, the ability to perform fast and accurate similarity searches on massive datasets is crucial. Qdrant has established itself as a leading, high-performance, and production-ready vector similarity search engine and vector database.
As data complexity continuously increases, traditional Single-Vector Search methods—which rely on a single vector to represent a data object—are often insufficient to capture all the nuances of complex entities.
Metadata is defined simply as "data about data." It does not constitute the raw content itself, but rather information that describes, explains, or locates the primary data.
Scalar Filtering, often referred to as Constant Filtering or a form of Predicate Pushdown, is a crucial technique in database query optimization. Its core idea is to significantly reduce the amount of data that needs to be processed by utilizing scalar predicates contained within a query as early as possible, before executing complex operations like table joins, grouping, or sorting.
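To make the idea concrete, here is a small illustrative sketch in plain Python (hypothetical tables and values, not any particular engine): applying the scalar predicate before the join yields the same result while the join processes far fewer rows.

```python
# Minimal sketch of predicate pushdown: apply scalar predicates *before*
# the join, so the expensive operation sees far fewer rows.
# Table names and values are made up for illustration.
orders = [
    {"order_id": 1, "customer_id": 10, "amount": 250.0, "region": "EU"},
    {"order_id": 2, "customer_id": 11, "amount": 40.0,  "region": "US"},
    {"order_id": 3, "customer_id": 10, "amount": 900.0, "region": "EU"},
]
customers = [
    {"customer_id": 10, "name": "Acme"},
    {"customer_id": 11, "name": "Globex"},
]

# Naive plan: join everything, then filter.
naive = [
    {**o, **c}
    for o in orders
    for c in customers
    if o["customer_id"] == c["customer_id"]
]
naive = [row for row in naive if row["region"] == "EU" and row["amount"] > 100]

# Pushed-down plan: apply the scalar predicate first, then join the survivors.
filtered_orders = [o for o in orders if o["region"] == "EU" and o["amount"] > 100]
pushed = [
    {**o, **c}
    for o in filtered_orders
    for c in customers
    if o["customer_id"] == c["customer_id"]
]

assert naive == pushed  # same result, far less work inside the join
```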
Product Quantization (PQ) is a technique for compressing high-dimensional vectors in order to make large-scale similarity search feasible. In modern AI systems – from image search engines to recommendation systems – data is often represented as high-dimensional vectors (embeddings).
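The sketch below shows the core mechanics under illustrative parameters, assuming NumPy and scikit-learn are available: each vector is split into M subvectors, each subvector is replaced by the index of its nearest centroid in a small per-subspace codebook, and the whole vector is stored as a handful of one-byte codes.

```python
# Sketch of Product Quantization: split each vector into M subvectors and
# encode each subvector as the index of its nearest codebook centroid.
# Parameters and training data are illustrative only.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
dim, M, k = 128, 8, 256           # vector dim, subspaces, centroids per subspace
sub = dim // M                    # dimensionality of each subvector
train = rng.normal(size=(5_000, dim)).astype(np.float32)

# Train one small codebook (k centroids) per subspace.
codebooks = []
for m in range(M):
    km = KMeans(n_clusters=k, n_init=1, random_state=0).fit(train[:, m * sub:(m + 1) * sub])
    codebooks.append(km.cluster_centers_)

def encode(x):
    """Compress a single vector into M one-byte codes."""
    codes = np.empty(M, dtype=np.uint8)
    for m in range(M):
        d = np.linalg.norm(codebooks[m] - x[m * sub:(m + 1) * sub], axis=1)
        codes[m] = np.argmin(d)
    return codes

def decode(codes):
    """Reconstruct an approximate vector from its codes."""
    return np.concatenate([codebooks[m][codes[m]] for m in range(M)])

x = train[0]
codes = encode(x)                 # 8 bytes instead of 128 float32 values (512 bytes)
print("reconstruction error:", np.linalg.norm(x - decode(codes)))
```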
The Inverted File Index (IVF) is a widely used data structure for approximate nearest neighbor (ANN) searches in high-dimensional vector spaces.
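A toy sketch of the idea, assuming NumPy and scikit-learn and using made-up parameters: vectors are partitioned into coarse clusters, one inverted list per cluster, and a query scans only the few lists whose centroids are closest. Increasing nprobe trades speed for recall.

```python
# Toy IVF index: partition vectors into nlist clusters, keep one "inverted
# list" of vector ids per cluster, and search only the nprobe closest lists.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
data = rng.normal(size=(20_000, 64)).astype(np.float32)

nlist, nprobe = 100, 5
km = KMeans(n_clusters=nlist, n_init=1, random_state=0).fit(data)
lists = {c: np.where(km.labels_ == c)[0] for c in range(nlist)}  # inverted lists

def ivf_search(query, k=10):
    # 1. Find the nprobe coarse centroids closest to the query.
    d_centroids = np.linalg.norm(km.cluster_centers_ - query, axis=1)
    probe = np.argsort(d_centroids)[:nprobe]
    # 2. Scan only the vectors stored in those lists.
    candidates = np.concatenate([lists[c] for c in probe])
    d = np.linalg.norm(data[candidates] - query, axis=1)
    return candidates[np.argsort(d)[:k]]

print(ivf_search(rng.normal(size=64).astype(np.float32)))
```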
In the modern landscape of AI and massive data volumes, the ability to quickly and accurately find data points (represented as high-dimensional vectors) that are "most similar" to a given query—a process known as Nearest Neighbor Search (NNS)—is absolutely crucial.
The K-Nearest Neighbors (KNN) algorithm is a foundational supervised learning method primarily used to address two essential predictive modeling tasks: classification and regression.
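A minimal NumPy-only sketch of the classification case on synthetic data: the query point takes the majority label among its k nearest training points.

```python
# Minimal KNN classifier sketch: classify a query point by the majority
# label among its k nearest training points. Data is synthetic.
import numpy as np
from collections import Counter

def knn_predict(X_train, y_train, x_query, k=5):
    dists = np.linalg.norm(X_train - x_query, axis=1)   # distance to every training point
    nearest = np.argsort(dists)[:k]                     # indices of the k closest points
    votes = Counter(y_train[nearest].tolist())          # majority vote among their labels
    return votes.most_common(1)[0][0]

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(4, 1, (50, 2))])
y = np.array([0] * 50 + [1] * 50)
print(knn_predict(X, y, np.array([3.5, 3.5]), k=5))     # expected: 1
```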
The rapid advancement of deep learning, particularly Large Language Models (LLMs), has driven a fundamental shift in data representation. Complex data like text, images, and audio are now encoded as high-dimensional, dense vectors (Vector Embeddings).
In the digital age, the greatest challenge for enterprises is no longer acquiring information, but effectively managing, retrieving, and utilizing it.
In the world of search engines, recommendation systems, and large language models, the speed required to sift through massive datasets often conflicts with the need for high accuracy and deep semantic understanding.
In the vast ocean of digital information, finding precisely what you need can often feel like searching for a needle in a haystack. We type a few words into a search bar, expecting the universe to understand our intent, but frequently the results fall short.
In Natural Language Processing (NLP), particularly within Information Retrieval (IR) and semantic similarity tasks, the Bi-Encoder and Cross-Encoder represent the two dominant model architectures.
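The contrast can be sketched as follows, assuming the sentence-transformers library and these public checkpoint names are available in your environment: the Bi-Encoder embeds query and documents independently (so document embeddings can be precomputed and indexed), while the Cross-Encoder scores each query-document pair jointly and is therefore typically reserved for reranking a small candidate set.

```python
# Bi-Encoder vs. Cross-Encoder sketch; model names are common public
# checkpoints and are assumptions, not requirements.
from sentence_transformers import SentenceTransformer, CrossEncoder, util

query = "How do I reset my password?"
docs = ["Click 'Forgot password' on the login page.",
        "Our office is open Monday to Friday."]

# Bi-Encoder: encode query and documents independently, compare by cosine similarity.
bi = SentenceTransformer("all-MiniLM-L6-v2")
scores_bi = util.cos_sim(bi.encode(query), bi.encode(docs))

# Cross-Encoder: score each (query, document) pair jointly; more accurate,
# but every pair needs a full forward pass.
cross = CrossEncoder("cross-encoder/ms-marco-MiniLM-L-6-v2")
scores_cross = cross.predict([(query, d) for d in docs])

print(scores_bi, scores_cross)
```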
In the early days of the internet, search relied on precise keyword matching (Sparse Search). But as information exploded and user queries evolved, traditional methods revealed their limitations: they couldn't understand semantics or context.
In the world of information retrieval, Sparse Search (also known as Lexical Search or Keyword Search) stands as the foundational technology that powers traditional search engines.
With the rapid advancement of artificial intelligence technology, Retrieval-Augmented Generation (RAG) has emerged as the core framework for enhancing the output quality of Large Language Models (LLMs) in enterprise-level applications.
Traditional data processing and retrieval systems rely on structured query languages (such as SQL) or exact keyword matching.
Information Retrieval (IR), in the fields of computing and information science, is defined as the task of identifying and retrieving information system resources that are relevant to a specific information need.
VeloDB's primary-key model writes a Delete Bitmap at ingest time so that queries skip duplicate rows, enabling sub-second reads.
In the rapidly growing Web3 ecosystem, data has become the most critical asset. From on-chain transaction analysis, DeFi protocol monitoring, and NFT marketplace insights to off-chain user behavior analytics, observability, and A/B testing, Web3 companies face an ever-increasing demand for real-time, sub-second latency, and cost-efficient data infrastructure.
LLM Observability is the comprehensive practice of monitoring, tracking, and analyzing the behavior, performance, and outputs of Large Language Models (LLMs) throughout their entire lifecycle from development to production. It provides real-time visibility into every layer of LLM-based systems, enabling organizations to understand not just what is happening with their AI models, but why specific behaviors occur, ensuring reliable, safe, and cost-effective AI operations.
Filebeat is a lightweight log shipper designed to efficiently forward and centralize log data as part of the Elastic Stack ecosystem. Originally developed by Elastic, Filebeat belongs to the Beats family of data shippers and serves as a crucial component in modern log management pipelines. As organizations increasingly deploy distributed systems, microservices, and cloud-native applications that generate massive volumes of log data across multiple servers and containers, Filebeat provides a reliable, resource-efficient solution for collecting, processing, and forwarding log files to centralized destinations like Elasticsearch, Logstash, or other data processing systems. Unlike heavy-weight log collection tools, Filebeat is specifically designed to consume minimal system resources while maintaining high reliability and performance in production environments.
OLAP (Online Analytical Processing) and OLTP (Online Transaction Processing) are two fundamental paradigms in database systems that serve distinctly different purposes in modern data architecture. OLTP systems are designed to handle high-frequency, real-time transactional operations with an emphasis on data consistency, speed, and concurrent user access for operational business processes. In contrast, OLAP systems are optimized for complex analytical queries, data aggregation, and business intelligence operations that process large volumes of historical data to generate insights. Understanding the differences between these systems is crucial for designing effective data architectures that support both operational efficiency and strategic decision-making in today's data-driven organizations.
Real-time analytics is the ability to process, analyze, and derive insights from data immediately as it arrives, allowing organizations to make instantaneous decisions based on current information. Unlike traditional batch processing, which analyzes historical data hours or days after collection, real-time analytics provides sub-second to sub-minute insights from streaming data sources such as user interactions, IoT sensors, financial transactions, and application logs. As businesses increasingly require immediate responses to changing conditions—from fraud detection and dynamic pricing to personalized recommendations and operational monitoring—real-time analytics has become a strategic necessity for maintaining a competitive edge in today's fast-paced digital economy.
An inverted index is a fundamental data structure used in information retrieval systems and search engines to enable fast full-text search capabilities. Unlike a regular index that maps document IDs to their content, an inverted index reverses this relationship by mapping each unique word or term to a list of documents containing that term. This "inversion" allows search engines to quickly identify which documents contain specific search terms without scanning through entire document collections. Inverted indexes are the backbone of modern search technologies, powering everything from web search engines like Google to database full-text search capabilities in systems like Apache Doris, ClickHouse, and Elasticsearch.
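A minimal sketch with made-up documents: build the term-to-document mapping, then answer a query by intersecting the posting lists of its terms.

```python
# Minimal inverted index: map each term to the set of document ids that
# contain it, then answer a query by intersecting posting lists.
from collections import defaultdict

docs = {
    0: "apache doris real-time analytics",
    1: "elasticsearch full-text search engine",
    2: "doris supports full-text search with inverted indexes",
}

index = defaultdict(set)
for doc_id, text in docs.items():
    for term in text.lower().split():
        index[term].add(doc_id)          # posting list: term -> doc ids

def search(query):
    postings = [index.get(t, set()) for t in query.lower().split()]
    return set.intersection(*postings) if postings else set()

print(search("full-text search"))        # {1, 2}
```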
A Cost-Based Optimizer (CBO) represents a sophisticated query optimization framework designed to maximize database performance by systematically evaluating multiple potential execution plans and selecting the one with the lowest estimated computational cost. In contrast to traditional rule-based optimizers, which depend on fixed heuristic rules, the CBO leverages comprehensive statistical metadata—including data distribution, table cardinality, and index availability—to make context-aware, data-driven optimization decisions.
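The following toy sketch illustrates the principle with a purely made-up cost model and statistics: the optimizer estimates the cost of each candidate plan from table cardinalities and predicate selectivities, then picks the cheapest one.

```python
# Toy cost-based optimization: compare estimated costs of two candidate
# plans using table statistics. Numbers and the cost model are illustrative.
stats = {
    "orders":    {"rows": 10_000_000},
    "customers": {"rows": 100_000},
}
selectivity = {"orders.region = 'EU'": 0.02}   # estimated from column statistics

def cost_filter_then_join():
    # Filter orders first, then join the survivors with customers.
    filtered = stats["orders"]["rows"] * selectivity["orders.region = 'EU'"]
    return stats["orders"]["rows"] + stats["customers"]["rows"] + filtered

def cost_join_then_filter():
    # Join the full tables first, filter afterwards (assume one match per order).
    joined = stats["orders"]["rows"]
    return stats["orders"]["rows"] + stats["customers"]["rows"] + joined

plans = {"filter_then_join": cost_filter_then_join(),
         "join_then_filter": cost_join_then_filter()}
best = min(plans, key=plans.get)
print(best, plans)                            # the optimizer chooses the cheapest plan
```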
RAG (Retrieval-Augmented Generation) is an AI framework that enhances large language models (LLMs) by combining them with external knowledge retrieval systems. This architecture allows LLMs to access up-to-date, domain-specific information from external databases, documents, or knowledge bases during the generation process, significantly improving the accuracy, relevance, and factuality of AI-generated responses.
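A minimal sketch of the retrieve-augment-generate flow; `embed`, `vector_store.search`, and `llm_generate` are hypothetical placeholders standing in for whatever embedding model, vector database, and LLM a given stack actually uses.

```python
# Minimal RAG flow. The three callables passed in are hypothetical
# placeholders, not a specific vendor API.
def answer_with_rag(question, vector_store, embed, llm_generate, top_k=3):
    # 1. Retrieve: embed the question and fetch the most similar passages.
    query_vector = embed(question)
    passages = vector_store.search(query_vector, top_k=top_k)

    # 2. Augment: pack the retrieved passages into the prompt as grounding context.
    context = "\n\n".join(p.text for p in passages)
    prompt = (
        "Answer the question using only the context below.\n\n"
        f"Context:\n{context}\n\nQuestion: {question}\nAnswer:"
    )

    # 3. Generate: the LLM produces a response grounded in the retrieved context.
    return llm_generate(prompt)
```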
Hybrid search is a powerful search approach that combines multiple search methodologies, primarily keyword-based (lexical) search and vector-based (semantic) search, to deliver more comprehensive and accurate search results. By leveraging the strengths of both exact term matching and semantic understanding, hybrid search provides users with relevant results that capture both literal matches and contextual meaning, significantly improving search precision and user satisfaction.
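One common way to fuse the two result lists is Reciprocal Rank Fusion (RRF); the entry above does not prescribe a specific fusion method, so the sketch below is just one illustrative choice. The constant k=60 is a widely used default that damps the influence of any single ranking.

```python
# Reciprocal Rank Fusion of a keyword ranking and a vector ranking.
# Document ids are made up for illustration.
def rrf_fuse(keyword_ranking, vector_ranking, k=60):
    scores = {}
    for ranking in (keyword_ranking, vector_ranking):
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

keyword_hits = ["doc3", "doc1", "doc7"]     # from BM25 / lexical search
vector_hits  = ["doc1", "doc5", "doc3"]     # from ANN / semantic search
print(rrf_fuse(keyword_hits, vector_hits))  # doc1 and doc3 rise to the top
```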
Vector search is a modern search technique that enables finding similar items by converting data into high-dimensional numerical representations called vectors or embeddings. Unlike traditional keyword-based search that matches exact terms, vector search understands semantic meaning and context, allowing users to find relevant content even when exact keywords don't match. This technology powers recommendation systems, similarity search, and AI applications by measuring mathematical distances between vectors in multi-dimensional space.
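At its core this is a ranking over similarity scores, as in the NumPy sketch below (random vectors stand in for real embeddings).

```python
# Core of vector search: rank stored embeddings by cosine similarity to a
# query embedding. Random vectors stand in for real embeddings.
import numpy as np

rng = np.random.default_rng(0)
embeddings = rng.normal(size=(1_000, 384)).astype(np.float32)   # document vectors
query = rng.normal(size=384).astype(np.float32)                 # query vector

# Normalize so that a dot product equals cosine similarity.
embeddings /= np.linalg.norm(embeddings, axis=1, keepdims=True)
query /= np.linalg.norm(query)

scores = embeddings @ query                   # cosine similarity to every document
top_k = np.argsort(-scores)[:5]               # ids of the 5 most similar documents
print(top_k, scores[top_k])
```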
Semi-structured data is a form of data that sits between structured and unstructured data, containing some organizational properties without conforming to a rigid schema like traditional relational databases. This data format maintains partial organization through tags, metadata, and hierarchical structures while retaining flexibility for varied content representation. As organizations increasingly handle diverse data sources including web content, IoT device outputs, social media feeds, and API responses, semi-structured data has become fundamental to modern data management strategies. Unlike structured data that fits neatly into rows and columns, or unstructured data that lacks any organizational framework, semi-structured data provides a balance of flexibility and organization that enables efficient storage, processing, and analysis across distributed systems and cloud-native architectures.
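As a small made-up illustration, the JSON records below carry named, nested fields yet do not share an identical schema, which is exactly the partial organization described above.

```python
# Semi-structured data in practice: JSON records carry field names and
# nesting, but individual records may differ in shape. Records are made up.
import json

records = [
    {"id": 1, "user": {"name": "alice", "tags": ["admin"]}, "source": "web"},
    {"id": 2, "user": {"name": "bob"}, "device": {"type": "iot", "fw": "1.2"}},
]

for r in records:
    # The same pipeline handles both records despite their different fields.
    print(json.dumps(r), "->", r["user"]["name"], r.get("source", "n/a"))
```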
OpenTelemetry is a 100% free and open-source observability framework designed to provide comprehensive telemetry data collection, processing, and export capabilities for modern distributed systems. Born as a merger of OpenTracing and OpenCensus projects in 2019, OpenTelemetry has become the industry standard for observability instrumentation under the Cloud Native Computing Foundation (CNCF). As organizations increasingly adopt microservices, containerized applications, and cloud-native architectures, OpenTelemetry addresses the critical need for unified observability across complex distributed systems by providing standardized APIs, SDKs, and tools for generating, collecting, and exporting traces, metrics, and logs without vendor lock-in.
Grafana is an open-source analytics and monitoring platform that provides comprehensive data visualization, dashboards, and alerting capabilities for observability across modern IT infrastructure. Originally developed by Torkel Ödegaard in 2014, Grafana has evolved into the leading solution for creating interactive dashboards that unify metrics, logs, traces, and other data sources into coherent visual narratives.
A columnar database is a database management system that stores data organized by columns rather than by rows, fundamentally changing how information is physically stored and accessed on disk. Unlike traditional row-oriented databases where each record's data is stored together, columnar databases group all values for each column together, creating a storage structure optimized for analytical queries and data compression. This approach has become the cornerstone of modern data warehousing and business intelligence platforms, powering cloud analytics services like Amazon Redshift, Google BigQuery, and Snowflake.
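A plain-Python sketch of the same three made-up records in both layouts shows why analytical aggregates favor the columnar one: all of a column's values sit together, so a SUM touches only that array (and values of one type compress well).

```python
# Row vs. column layout: the same three records stored both ways.
row_store = [
    {"id": 1, "city": "Berlin", "amount": 120.0},
    {"id": 2, "city": "Paris",  "amount": 80.0},
    {"id": 3, "city": "Berlin", "amount": 200.0},
]

column_store = {
    "id":     [1, 2, 3],
    "city":   ["Berlin", "Paris", "Berlin"],
    "amount": [120.0, 80.0, 200.0],
}

# SELECT SUM(amount): the columnar layout reads one contiguous array
# instead of touching every field of every row.
total_row_store = sum(r["amount"] for r in row_store)
total_column_store = sum(column_store["amount"])
assert total_row_store == total_column_store == 400.0
```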
Apache Parquet is an open-source columnar storage format optimized for large-scale data processing and analytics. It's widely adopted across big data ecosystems, including Apache Hive, Spark, Doris, Trino, Presto, and many others.
Apache Paimon is a high-performance streaming-batch unified table format and data lake storage system designed specifically for real-time data processing. It supports transactions, consistent views, incremental read/write operations, and schema evolution, providing essential capabilities required by modern data lake architectures.
Apache ORC (Optimized Row Columnar) is an open-source columnar storage format optimized for large-scale data storage and analytics. Developed by Hortonworks in 2013, it has become an Apache top-level project and is widely used in big data ecosystems including Apache Hive, Spark, Presto, Trino, and more.
Apache Iceberg is an open-source, large-scale analytical table format initiated by Netflix and donated to Apache, designed to address the limitations of traditional Hive table formats in consistency, performance, and metadata management. In today's lakehouse architectures, with multi-engine concurrent access and frequent schema evolution, Iceberg's ACID transactions, hidden partitioning, and time travel capabilities make it highly sought after.
Apache Hudi (Hadoop Upserts Deletes Incrementals) is an open-source data lake platform originally developed by Uber that became an Apache top-level project in 2019. By providing transactional capabilities, incremental processing, and consistency control for data lakes, it transforms traditional data lakes into modern lakehouses. As users increasingly demand real-time capabilities, update functionality, and cost efficiency at massive data scale, Hudi has emerged as the solution to these pain points.
Apache Hive is a distributed, fault-tolerant data warehouse system built on Hadoop that supports reading, writing, and managing massive datasets (typically at petabyte scale) using HiveQL, an SQL-like language. As big data scales continue to grow exponentially, enterprises increasingly demand familiar SQL interfaces for processing enormous datasets. Hive emerged precisely to address this need, delivering tremendous productivity value.
Delta Lake is an open-source storage format that combines Apache Parquet files with a powerful metadata transaction log. It brings ACID transactions, consistency guarantees, and data versioning capabilities to data lakes. As large-scale data lakes have been widely deployed in enterprises, Parquet alone cannot address performance bottlenecks, the lack of data consistency guarantees, or poor governance. Delta Lake emerged to address these challenges and has become an essential foundation for building modern lakehouse architectures.
Lakehouse (Data Lake + Data Warehouse) is a unified architecture that aims to provide data warehouse-level transactional capabilities, management capabilities, and query performance on top of a data lake foundation. It not only retains the low cost and flexibility of data lakes but also provides the consistency and high-performance analytical capabilities of data warehouses.
Apache Doris is an MPP-based real-time data warehouse known for its high query speed. For queries on large datasets, it returns results in sub-second time. It supports both high-concurrency point queries and high-throughput complex analysis, and it can be used for report analysis, ad-hoc queries, unified data warehousing, and data lake query acceleration. On top of Apache Doris, users can build applications for user behavior analysis, A/B testing platforms, log analysis, user profiling, and e-commerce order analysis.
An analytics database is a specialized database management system optimized for Online Analytical Processing (OLAP), designed to handle complex queries, aggregations, and analytical workloads across large datasets. Unlike traditional transactional databases that focus on operational efficiency and data consistency, analytics databases prioritize query performance, data compression, and support for multidimensional analysis. Modern analytics databases leverage columnar storage, massively parallel processing (MPP) architectures, and vectorized execution engines to deliver sub-second response times on petabyte-scale datasets, making them essential for business intelligence, data science, and real-time decision-making applications.
A data warehouse is a centralized repository designed to store, integrate, and analyze large volumes of structured data from multiple sources within an organization. Unlike traditional databases optimized for transactional processing, data warehouses are specifically architected for analytical processing (OLAP), enabling complex queries and historical data analysis. In the modern data-driven landscape, data warehouses serve as the foundation for business intelligence, reporting, and decision-making processes across enterprises of all sizes.