from fin_infra.categorization.llm_layer import LLMCategorizer

LLM-based transaction categorization (Layer 4). Uses ai-infra.llm.LLM with few-shot prompting and structured output. Caches predictions via svc-infra.cache to minimize API costs.
Args:
    provider: LLM provider ("google_genai", "openai", "anthropic")
    model_name: Model name (e.g., "gemini-2.5-flash", "gpt-4.1-mini")
    max_cost_per_day: Daily budget cap in USD (default $0.10)
    max_cost_per_month: Monthly budget cap in USD (default $2.00)
    cache_ttl: Cache TTL in seconds (default 24 hours)
    enable_personalization: Enable user context injection (default False)
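The budget caps above imply per-call cost tracking with daily and monthly rollover. The docstring does not show how fin_infra enforces them internally, so the sketch below is an illustrative assumption: a `BudgetTracker` class (hypothetical name, not part of the library) that refuses a call when an estimated cost would exceed either cap.

```python
from datetime import date


class BudgetTracker:
    """Illustrative daily/monthly spend tracker (not the fin_infra implementation)."""

    def __init__(self, max_cost_per_day: float = 0.10, max_cost_per_month: float = 2.00):
        self.max_day = max_cost_per_day
        self.max_month = max_cost_per_month
        today = date.today()
        self._day = today
        self._month = (today.year, today.month)
        self._spent_day = 0.0
        self._spent_month = 0.0

    def _roll(self) -> None:
        # Reset counters when the calendar day or month changes.
        today = date.today()
        if today != self._day:
            self._day, self._spent_day = today, 0.0
        if (today.year, today.month) != self._month:
            self._month, self._spent_month = (today.year, today.month), 0.0

    def allow(self, estimated_cost: float) -> bool:
        # True only if the call fits under both the daily and monthly caps.
        self._roll()
        return (self._spent_day + estimated_cost <= self.max_day
                and self._spent_month + estimated_cost <= self.max_month)

    def record(self, cost: float) -> None:
        # Call after a successful LLM request with its actual cost.
        self._roll()
        self._spent_day += cost
        self._spent_month += cost
```

With the defaults, two $0.05 calls exhaust the daily budget and `allow` then returns False until the next day.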
Example:
    >>> categorizer = LLMCategorizer(
    ...     provider="google_genai",
    ...     model_name="gemini-2.5-flash",
    ... )
    >>> prediction = await categorizer.categorize("UNKNOWN COFFEE CO")
    >>> print(prediction.category, prediction.confidence)
    Coffee Shops 0.85