
SemanticSimilarity

from ai_infra.eval import SemanticSimilarity
Module: ai_infra.eval
Extends: Evaluator[str, str]

Evaluate semantic similarity between output and expected output. Uses ai_infra.Embeddings to compute cosine similarity between the output and expected_output embeddings.
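The underlying score is a plain cosine similarity between the two embedding vectors. A minimal sketch of that computation (the helper below is illustrative, not ai_infra's actual code):

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine of the angle between two embedding vectors, in [-1.0, 1.0]."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Vectors pointing the same way score 1.0; orthogonal vectors score 0.0.
print(cosine_similarity([1.0, 0.0], [2.0, 0.0]))  # → 1.0
print(cosine_similarity([1.0, 0.0], [0.0, 1.0]))  # → 0.0
```

Because embeddings of natural-language text are rarely anti-parallel, scores in practice cluster in the upper part of the 0.0-1.0 range, which is why the default threshold is as high as 0.8.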

Args

provider: Embedding provider (openai, google, voyage, cohere, huggingface). If None, auto-detects from the environment.
model: Embedding model name. Uses the provider default if not specified.
threshold: Minimum similarity score to pass (0.0-1.0). Default: 0.8.
embeddings: Pre-configured Embeddings instance. If provided, `provider` and `model` are ignored.

Example

>>> from ai_infra.eval.evaluators import SemanticSimilarity
>>> from pydantic_evals import Case, Dataset
>>>
>>> dataset = Dataset(
...     cases=[
...         Case(
...             inputs="What is the capital of France?",
...             expected_output="Paris is the capital",
...         ),
...     ],
...     evaluators=[SemanticSimilarity(threshold=0.7)],
... )

Returns

EvaluationReason with:
  - value: float (similarity score 0.0-1.0)
  - reason: Explanation of the score and pass/fail
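How the score and threshold combine into a pass/fail explanation can be sketched like this (the `EvaluationReason` dataclass is a stand-in for the real type, and the reason wording is invented for illustration):

```python
from dataclasses import dataclass

@dataclass
class EvaluationReason:  # stand-in for the real ai_infra type
    value: float
    reason: str

def score_to_reason(score: float, threshold: float = 0.8) -> EvaluationReason:
    """Turn a raw similarity score into a pass/fail explanation."""
    verdict = "pass" if score >= threshold else "fail"
    return EvaluationReason(
        value=score,
        reason=f"similarity {score:.2f} vs threshold {threshold:.2f}: {verdict}",
    )

print(score_to_reason(0.91).reason)  # → similarity 0.91 vs threshold 0.80: pass
```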

Constructor
SemanticSimilarity(provider: str | None = None, model: str | None = None, threshold: float = 0.8, embeddings: Embeddings | None = None) -> None
Parameter   Type               Default  Description
provider    str | None         None     Embedding provider; auto-detected from the environment if None
model       str | None         None     Embedding model name; provider default if not specified
threshold   float              0.8      Minimum similarity score to pass (0.0-1.0)
embeddings  Embeddings | None  None     Pre-configured Embeddings instance; overrides provider and model

Methods

evaluate: Scores an output against the expected output and returns an EvaluationReason (see Returns above).