Retrieval & Query
Query libraries with a mix of text, semantic, hybrid, metadata, and custom filters. The retrieval.py module implements the Query class, which is the primary way search and retrieval are performed. Each Query object requires a Library as a mandatory constructor parameter. The Query object operates against that Library and has access to all of the Library's attributes, metadata, and methods.
Retrievals in llmware use the Library abstraction as the primary unit against which a particular query or retrieval is executed. This makes it possible to maintain multiple distinct knowledge bases, potentially aligned to different use cases, users, accounts, and permissions.
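The per-Library scoping can be illustrated with a toy, dependency-free sketch. This is not the llmware API; it only models the idea that each knowledge base holds its own content and a query runs against exactly one of them. All names and sample chunks below are hypothetical.

```python
# toy illustration (not the llmware API): each "library" keeps its own
# text chunks, and a query is scoped to exactly one library
libraries = {
    "hr_policies": [
        "Vacation accrues at 1.5 days per month.",
        "Remote work requires manager approval.",
    ],
    "engineering_docs": [
        "Deploys go out Tuesdays and Thursdays.",
        "Vacation requests block the release calendar.",
    ],
}

def text_query(library_name, query):
    """Return chunks from one library that contain the query term."""
    term = query.lower()
    return [c for c in libraries[library_name] if term in c.lower()]

# the same query returns different results per knowledge base
hr_hits = text_query("hr_policies", "vacation")
eng_hits = text_query("engineering_docs", "vacation")
```

Because each query is bound to one library, different use cases (or users) stay isolated from one another's content.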
Executing Queries
```python
from llmware.retrieval import Query
from llmware.library import Library

# step 1 - load a previously created library
lib = Library().load_library("my_library")

# step 2 - create a query object
q = Query(lib)

# step 3 - run lots of different queries (many other options in the examples)

# basic text query
results1 = q.text_query("text query", result_count=20, exact_mode=False)

# semantic query
results2 = q.semantic_query("semantic query", result_count=10)

# text query restricted to selected documents in the library, with "exact" matching
results3 = q.text_query_with_document_filter("new query", {"file_name": "selected file name"}, exact_mode=True)

# to apply a specific embedding (if multiple on the library), pass the names when creating the query object
q2 = Query(lib, embedding_model_name="mini_lm_sbert", vector_db="milvus")
results4 = q2.semantic_query("new semantic query")
```
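Query methods return a list of result entries that can be post-processed with ordinary Python. As a hedged sketch, the snippet below assumes each result is a dictionary with `file_source`, `distance`, and `text` keys; those key names are illustrative assumptions here, not a documented schema. It keeps the best (lowest-distance) match per source file:

```python
# hedged sketch: post-process a list of result dicts; the keys used here
# ("file_source", "distance", "text") are illustrative assumptions
sample_results = [
    {"file_source": "a.pdf", "distance": 0.41, "text": "chunk one"},
    {"file_source": "b.pdf", "distance": 0.78, "text": "chunk two"},
    {"file_source": "a.pdf", "distance": 0.25, "text": "chunk three"},
]

def best_per_source(results):
    """Keep the lowest-distance (closest) result for each source file."""
    best = {}
    for r in results:
        src = r["file_source"]
        if src not in best or r["distance"] < best[src]["distance"]:
            best[src] = r
    # order the surviving results by ascending distance (best matches first)
    return sorted(best.values(), key=lambda r: r["distance"])

top = best_per_source(sample_results)
```

Deduplicating by source like this is a common step before passing retrieved passages to a model, so one long document does not crowd out the others.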
Need help or have questions?
Check out the llmware videos and GitHub repository.
Reach out to us on GitHub Discussions.
About the project
llmware is © 2023-2024 by AI Bloks.
Contributing
Please discuss any change you would like to make publicly first, for example on GitHub by raising an issue or starting a new discussion. You can also write an email or start a discussion on our Discord channel. Read more about becoming a contributor in the GitHub repo.
Code of conduct
We welcome everyone into the llmware community. View our Code of Conduct in our GitHub repository.
llmware and AI Bloks
llmware is an open source project from AI Bloks, the company behind llmware. The company offers a Software as a Service (SaaS) Retrieval Augmented Generation (RAG) service. AI Bloks was founded by Namee Oberst and Darren Oberst in October 2022.
License
llmware is distributed under the Apache-2.0 license.