Intentum Documentation (EN)

Why you're reading this page — This page explains what Intentum is, what it does, and where to start. If you're new to the concept, you're in the right place; setup and your first try are also covered here.


What is Intentum? (One sentence)

Intentum is a library that observes user and system behavior events in your app, infers intent from them, and lets you make automatic decisions (allow, warn, block, etc.) based on that intent. For example, it can detect and block a suspicious account takeover attempt, or speed up a frequently used flow.

Everyday analogy: Think of it like a security guard for your system. The guard watches (cameras, sensors), interprets (is the person familiar? are the actions suspicious?), and decides (open the door, alert, call security). Intentum automates these three steps in software: Observe → Infer → Decide.


What is Intentum? (Detail)

Intentum is an intent-driven approach for systems whose behavior is not fully deterministic: instead of asserting fixed scenario steps (as in classic BDD), you observe what happened, infer the user's or system's intent (optionally with AI embeddings), and decide what to do (Allow, Observe, Warn, Block) via policy rules.

  • Behavior Space — You record events (e.g. login, retry, submit). No rigid “Given/When/Then” steps.
  • Intent — A model infers intent and confidence (High / Medium / Low) from that behavior.
  • Policy — Rules map intent to decisions: e.g. “high confidence → Allow”, “too many retries → Block”.

So: observe → infer → decide. Useful when flows vary, AI adapts, or you care more about intent than exact steps.
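
To make the flow concrete, here is a minimal C# sketch. The names BehaviorSpace, Observe, LlmIntentModel, IntentPolicy, PolicyRule, and intent.Decide come from this documentation, but the namespaces, constructor arguments, and rule syntax shown are assumptions; see API Reference for the authoritative signatures.

    // Minimal Observe -> Infer -> Decide sketch. Namespaces, constructor arguments,
    // and the rule syntax are assumptions; check API Reference for exact signatures.
    using Intentum.Core;   // assumed home of BehaviorSpace, Intent, IntentPolicy
    using Intentum.AI;     // assumed home of LlmIntentModel and embedding providers

    // 1) Observe: record what actually happened.
    var space = new BehaviorSpace();
    space.Observe("user", "login");
    space.Observe("user", "retry");
    space.Observe("user", "submit");

    // 2) Infer: the model turns the behavior into an Intent (confidence + signals).
    var model = new LlmIntentModel(/* embedding provider, e.g. the mock one */);
    var intent = model.Infer(space);   // method name assumed from the "Infer" step

    // 3) Decide: policy rules map the intent to Allow / Observe / Warn / Block.
    var policy = new IntentPolicy()
        .AddRule(PolicyRule(/* e.g. too many retries -> Block */))
        .AddRule(PolicyRule(/* e.g. high confidence -> Allow */));

    var decision = intent.Decide(policy);
    Console.WriteLine($"Confidence: {intent.Confidence}, Decision: {decision}");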


What does the AI do?

In Intentum, the Infer step optionally uses AI (embeddings): it turns behavior keys (e.g. user:login, analyst:prepare_esg_report) into vectors, then produces a confidence level (High / Medium / Low / Certain) and signals from a similarity score.

Step by step:

  • Embedding — Each actor:action key gets a vector and a score from an embedding provider. Mock (local/test) is hash-based and deterministic; a real provider (OpenAI, Gemini, Mistral, Azure, Claude) produces semantic vectors, so the same behavior can yield slightly different confidence across models.
  • Similarity — A similarity engine aggregates all embeddings into a single score (e.g. an average). That score is mapped to a confidence level.
  • Intent — LlmIntentModel produces an Intent (Confidence + Signals) from this score; the policy returns Allow / Observe / Warn / Block based on that intent.

Practical notes:

  • Examples usually use the Mock provider (no API key needed). To try real AI, set the right environment variables and use a real provider; see Providers, How to use AI providers, and Setup – real provider.
  • Samples (samples/) are full showcase apps (many scenarios, a Web API with infer, explain, explain-tree, playground, analytics, timeline); examples (examples/) are minimal single-use-case projects (fraud-intent, customer-intent, ai-fallback-intent, chained-intent, time-decay-intent, vector-normalization, greenwashing-intent).
  • Templates: dotnet new intentum-webapi, intentum-backgroundservice, intentum-function — see Setup – Create from template. Examples overview and Tests overview are in the docs sidebar.
  • An Intent may include optional Reasoning (e.g. which rule matched or whether a fallback was used).
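
If you want to see roughly how switching from Mock to a real provider looks in code, the sketch below chooses a provider from an environment variable. The type names MockEmbeddingProvider and OpenAIEmbeddingProvider and the variable name OPENAI_API_KEY are placeholders, not verified API; the real class names, DI registration, and environment variables are documented in Providers and Setup – real provider.

    // Hedged sketch: pick the Mock or a real embedding provider at startup.
    // MockEmbeddingProvider, OpenAIEmbeddingProvider, and OPENAI_API_KEY are
    // placeholder names; see Providers for the actual types and variables.
    using Intentum.AI; // assumed namespace

    var apiKey = Environment.GetEnvironmentVariable("OPENAI_API_KEY");

    var model = string.IsNullOrEmpty(apiKey)
        ? new LlmIntentModel(new MockEmbeddingProvider())          // hash-based, deterministic, no key
        : new LlmIntentModel(new OpenAIEmbeddingProvider(apiKey)); // semantic vectors, may vary by model

    // The rest of the flow (Observe, Infer, Decide) stays the same either way.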


What replaced Given/When/Then?

In classic BDD you write Given (precondition), When (action), Then (assertion). That assumes fixed steps and a single pass/fail outcome. Intentum does not use Given/When/Then; it uses a different flow that fits non-deterministic and intent-based systems.

How each classic BDD step maps to Intentum:

  • Given (fixed preconditions) → Observe — you record what actually happened (events like login, retry, submit) in a BehaviorSpace. There is no fixed “given” state; you capture real behavior.
  • When (single action) → also Observe — events are the “when”; you add them with space.Observe(actor, action). Multiple events, order preserved.
  • Then (one assertion, pass/fail) → Infer + Decide — a model infers intent and confidence (High/Medium/Low) from the behavior, then a policy decides the outcome: Allow, Observe, Warn, or Block. So instead of “then X should be true” you get “given this behavior, intent is Y, decision is Z.”

In short: Given/When/Then is gone; in its place you have Observe (record events) → Infer (intent + confidence) → Decide (policy outcome). You describe what happened, let the model interpret intent, and let rules choose the decision. See API Reference for the types and Scenarios for examples.


Who is it for?

Use Intentum when:

  • Flows are non-deterministic or AI-driven.
  • You want to reason about intent rather than strict pass/fail steps.
  • You need policy-based decisions (allow / observe / warn / block) from observed behavior.

Skip Intentum when:

  • Your system is fully deterministic with stable requirements.
  • You only have small scripts or one-off tools where behavior drift doesn’t matter.

Learning path

Recommended order:

  1. Getting started — Learn the core flow and Intentum.Core / Runtime / AI. → Architecture, Setup.
  2. Next — Writing policies, scenarios, real AI provider. → Scenarios, Providers.
  3. Data — Storing and analyzing intent history. → Setup – Repository structure, Advanced Features (Analytics).
  4. Advanced — Analytics, Clustering, Explainability, Policy Store, etc. → Advanced Features.

Documentation contents

  • Architecture — Core flow (Observe → Infer → Decide), package layout, inference pipeline, persistence/analytics/rate-limiting/multi-tenancy flows (Mermaid diagrams).
  • Audience & use cases — Project types, user profiles, low/medium/high example test cases (AI and normal), sector-based examples.
  • Setup — Prerequisites, NuGet install, first project walkthrough, env vars.
  • API Reference — Main types (BehaviorSpace, Intent, Policy, providers) and how they fit together.
  • Providers — OpenAI, Gemini, Mistral, Azure OpenAI, Claude: env vars and DI setup.
  • How to use AI providers — Easy / medium / hard usage examples for each AI provider (Mock, OpenAI, Gemini, Mistral, Azure, Claude).
  • Usage Scenarios — Example flows (payment with retries, suspicious retries, policy order).
  • CodeGen — Scaffold CQRS + Intentum projects; generate Features from a test assembly or YAML spec.
  • Testing — Unit tests, coverage, error cases.
  • Local integration tests — Run VerifyAI (all providers) or per-provider integration tests locally with .env and scripts.
  • Coverage — How to generate and view coverage; SonarCloud findings and quality gate.
  • Benchmarks — BenchmarkDotNet: ToVector, Infer, PolicyEngine; run and refresh docs with ./scripts/run-benchmarks.sh.
  • Examples overview — Examples and samples by difficulty (simple / medium / hard) and real-life use cases.
  • Tests overview — Test projects, sample links, and how to run unit and integration tests.
  • Advanced Features — Similarity engines, vector normalization, rule-based and chained intent models, fluent APIs, caching, Intent Timeline, Intent Tree, Context-Aware Policy, Policy Store, Behavior Pattern Detector, Multi-Stage Model, Scenario Runner, stream processing, OpenTelemetry tracing, rate limiting, analytics, middleware, observability, batch processing, persistence.

5-minute first try (Quick start)

Concrete scenario: Within one minute, Alex 1) attempted to log in 3 times (2 failed, 1 succeeded) and 2) triggered a "submit" action after logging in. The system observes this behavior, infers intent and confidence, and the policy returns Allow or Observe.

  1. Install core packages

    dotnet add package Intentum.Core
    dotnet add package Intentum.Runtime
    dotnet add package Intentum.AI
    
  2. Run the sample (no API key needed; uses mock provider)

    dotnet run --project samples/Intentum.Sample
    
  3. Read Setup for a minimal “first project” and API Reference for the main types and flow.

You'll know it worked when — After running, you see a confidence level (e.g. High, Medium) and a decision (Allow, Observe, Warn, or Block) in the console. Example: Confidence: High, Decision: Allow.
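
If you then want to reproduce Alex's minute of behavior in your own first project instead of the shipped sample, the observation part would look roughly like the sketch below; the behavior keys ("login_failed", "login_succeeded", "submit") are illustrative choices, not necessarily the keys the sample uses.

    // Alex's behavior from the scenario above, expressed as Observe calls.
    // The behavior keys are illustrative; pick whatever keys fit your domain.
    var space = new BehaviorSpace();
    space.Observe("user", "login_failed");
    space.Observe("user", "login_failed");
    space.Observe("user", "login_succeeded");
    space.Observe("user", "submit");

    // Infer and decide exactly as in the earlier sketch:
    // var intent = model.Infer(space);
    // var decision = intent.Decide(policy);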


How to

  • How do I run the first scenario? — Run the console sample with dotnet run --project samples/Intentum.Sample (mock provider, no API key). For a full API and Blazor UI (intent infer, rate limiting, analytics, Overview, Commerce, Explain, FraudLive, Sustainability, Timeline, PolicyLab, Sandbox), run dotnet run --project samples/Intentum.Sample.Blazor. See Setup for repo structure and endpoints.
  • How do I add a policy? — Create an IntentPolicy, add rules in order with .AddRule(PolicyRule(...)) (e.g. Block first, then Allow). After inference, call intent.Decide(policy). See Scenarios and API Reference for details, and the short sketch after this list.
  • How do I model classic flows (payment, login, support)? — Record events with space.Observe(actor, action) (e.g. "user", "login"; "user", "retry"; "user", "submit"). The model infers intent from behavior; the policy returns Allow, Observe, Warn, or Block. Usage Scenarios includes both classic (payment, e‑commerce) and ESG examples.
  • How do I build scenarios with AI? — The same Observe flow works with Mock or a real provider (OpenAI, Gemini, etc.); use meaningful behavior keys and base policy on confidence and signals. Details and tips: Scenarios – How to build scenarios with AI.
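
A rough sketch of the policy how-to above, using the names mentioned in this documentation (IntentPolicy, .AddRule(PolicyRule(...)), intent.Decide(policy)); the arguments inside PolicyRule are placeholders, so check API Reference and Scenarios for the real rule syntax.

    // Add rules in the order you want them considered, e.g. Block first, then Allow
    // (see Scenarios for why ordering matters). PolicyRule arguments are placeholders.
    var policy = new IntentPolicy()
        .AddRule(PolicyRule(/* too many retries -> Block */))
        .AddRule(PolicyRule(/* high confidence -> Allow */));

    // After inference (see the quick-start sketch), apply the policy:
    var decision = intent.Decide(policy);   // Allow, Observe, Warn, or Block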

For more examples and rule ordering, see Scenarios and Audience & use cases.


Global usage notes

  • API keys — Use environment variables or a secret manager; never commit keys.
  • Regions and latency — Consider provider endpoint location and rate limits.
  • Production — Avoid logging raw provider requests/responses.

For full API method signatures, see the generated API site.