llmware offers a wide range of examples to cover the lifecycle of building RAG and Agent-based applications using small language models:

  • Parsing examples - ~14 stand-alone parsing examples covering all common document types, with options for parsing in memory, outputting to JSON, parsing custom-configured CSV and JSON files, running OCR on embedded images found in documents, extracting tables and images, chunking text, and handling zip files and web sources (see the parsing sketch below).
  • Embedding examples - ~15 stand-alone embedding examples showing how to use ~10 different vector databases and a wide range of leading open-source embedding models, including sentence transformers (see the embedding sketch below).
  • Retrieval examples - ~10 stand-alone examples illustrating different query and retrieval techniques - semantic queries, text queries, document filters, page filters, ‘hybrid’ queries, author search, using query state, and generating bibliographies (see the retrieval sketch below).
  • Dataset examples - ~5 stand-alone examples showing the ‘next step’ of leveraging a Library to re-package content into various datasets and automated NLP analytics (see the dataset sketch below).
  • Fast start example #1 (Parsing) - shows the basics of parsing.
  • Fast start example #2 (Embedding) - shows the basics of building embeddings.
  • CustomTable examples - ~5 examples to start building structured tables that can be used in conjunction with LLM-based workflows (see the CustomTable sketch below).
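
To give a flavor of the parsing API, here is a minimal in-memory parsing sketch. The folder path and file name are hypothetical placeholders, and the exact metadata keys on each chunk may vary by parser:

```python
from llmware.parsers import Parser

# parse a single PDF in memory - returns a list of text chunks,
# each a dictionary with the chunk text and source metadata
# (the folder path and file name below are placeholders)
parser_output = Parser().parse_one_pdf("/path/to/docs", "agreement.pdf")

for chunk in parser_output:
    print(chunk["text"])
```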
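
A minimal embedding sketch, assuming documents in a local folder; the library name, folder path, and the model/vector database pairing are all illustrative choices:

```python
from llmware.library import Library

# create a library and parse documents into it
library = Library().create_new_library("my_library")
library.add_files("/path/to/docs")

# build vector embeddings - any supported model / vector db pairing works
library.install_new_embedding(embedding_model_name="mini-lm-sbert",
                              vector_db="chromadb")
```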
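
A retrieval sketch against the library above; the query strings are illustrative, and the semantic query assumes an embedding has already been installed:

```python
from llmware.library import Library
from llmware.retrieval import Query

library = Library().load_library("my_library")
q = Query(library)

# basic text query
text_results = q.text_query("indemnification", result_count=10)

# semantic query - requires an installed embedding (see sketch above)
semantic_results = q.semantic_query("what is the notice period?",
                                    result_count=10)
```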
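
A dataset-building sketch, again assuming the library above; the token bounds are illustrative parameters, and the Dataset examples cover many more dataset types:

```python
from llmware.library import Library
from llmware.dataset_tools import Datasets

library = Library().load_library("my_library")

# re-package parsed library content into a basic text dataset
ds = Datasets(library=library)
ds_output = ds.build_text_ds(min_tokens=100, max_tokens=500)
```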
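
A CustomTable sketch; treat the method names and parameters here as assumptions drawn from the examples folder rather than a definitive API reference:

```python
from llmware.resources import CustomTable

# build a structured table from a CSV file
# (db choice, table name, and file paths are placeholders)
table = CustomTable(db="sqlite", table_name="customers")

# validate the CSV schema, load it, and write the rows to the db
table.validate_csv("/path/to/files", "customers.csv")
table.load_csv("/path/to/files", "customers.csv")
table.insert_rows()
```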

  • Models examples - ~20 examples showing a wide range of model inferences and use cases, including integrating Ollama models and OpenChat (e.g., LM Studio) models, using Llama-3 and Phi-3, bringing your own models into the ModelCatalog, and configuring sampling settings (see the model inference sketch below).
  • Prompts examples - ~5 examples illustrating how to use Prompt as an integrated workflow for attaching knowledge sources, managing prompt history, and applying fact-checking (see the prompt sketch below).
  • SLIM-Agents examples - ~20 examples showing how to build multi-model, multi-step Agent processes using locally running SLIM function-calling models (see the agent sketch below).
  • Fast start example #3 (Prompts and Models) - getting started with model inference.
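
A model inference sketch; the model name and sampling settings are illustrative, and any model in the ModelCatalog can be substituted:

```python
from llmware.models import ModelCatalog

# load a model from the catalog with deterministic sampling settings
model = ModelCatalog().load_model("bling-phi-3-gguf",
                                  temperature=0.0, sample=False)

response = model.inference("What is an indemnification clause?")
print(response["llm_response"])
```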
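
A prompt-with-sources sketch, combining a knowledge source with post-inference fact-checking; the model name and file path are placeholders:

```python
from llmware.prompts import Prompt

# load a model into a Prompt and attach a document as a source
prompter = Prompt().load_model("bling-phi-3-gguf")
prompter.add_source_document("/path/to/docs", "agreement.pdf",
                             query="notice period")

response = prompter.prompt_with_source("What is the notice period?")

# verify the response against the attached source material
source_check = prompter.evidence_check_sources(response)

prompter.clear_source_materials()
```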
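
An agent sketch using locally running SLIM function-calling models; the input text is a made-up example, and each tool call returns a structured dictionary:

```python
from llmware.agents import LLMfx

# create an agent and load a passage of work text
agent = LLMfx()
agent.load_work("The customer was really unhappy with the delayed delivery.")

# load locally running SLIM tools and call them as functions
agent.load_tool("sentiment")
agent.load_tool("tags")

sentiment = agent.sentiment()
tags = agent.tags()

print(sentiment, tags)
```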

Table of contents