Open-domain question answering (ODQA) has emerged as a critical area in natural language processing, enabling systems to answer user queries by retrieving relevant information from vast unstructured text corpora. A fundamental challenge in ODQA is retrieving passages that contain or support the correct answer, especially when queries and source texts share little lexical overlap. Traditional sparse retrieval methods such as TF-IDF and BM25, although fast, often fail to capture semantic relevance.

This paper presents a Dense Passage Retrieval (DPR) framework that leverages a dual-encoder architecture to generate dense vector representations for both questions and passages. By training the encoders with a contrastive learning objective on question-passage pairs, DPR enables semantic similarity matching in a shared embedding space. The model builds on pre-trained transformer encoders such as BERT, fine-tuned for the retrieval task.

We evaluate DPR on standard ODQA benchmarks, including Natural Questions and TriviaQA, where it consistently outperforms sparse retrieval methods in top-k retrieval accuracy. Moreover, integrating DPR with a reader module significantly improves end-to-end QA performance, demonstrating its applicability in real-time QA pipelines. The results confirm that dense retrieval not only bridges the gap between lexical and semantic matching but also enables scalable, efficient information access in large-scale QA systems.
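To make the contrastive training objective concrete, the following is a minimal sketch (not the paper's implementation) of the in-batch negative loss commonly used for dual encoders: given a batch where passage i is the positive for question i, every other passage in the batch serves as a negative. The function name and the use of plain NumPy arrays in place of real BERT encoder outputs are illustrative assumptions.

```python
import numpy as np

def in_batch_contrastive_loss(q_emb, p_emb):
    """Illustrative in-batch negative log-likelihood for a dual encoder.

    q_emb, p_emb: (batch, dim) arrays of question and passage embeddings,
    standing in for the outputs of the two fine-tuned encoders. Row i of
    p_emb is assumed to be the gold passage for question i.
    """
    # Similarity matrix: dot product between every question and every passage.
    sim = q_emb @ p_emb.T                        # shape (batch, batch)
    # Numerically stable log-softmax over the passage axis.
    sim = sim - sim.max(axis=1, keepdims=True)
    log_probs = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    # Negative log-likelihood of the diagonal entries (the gold pairs).
    return -np.mean(np.diag(log_probs))
```

In a real training loop the gradient of this loss would be backpropagated through both encoders; at inference time, retrieval reduces to a top-k inner-product search over the pre-computed passage embeddings.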