Advanced Features (EN)

Why you're reading this page: This page explains Intentum's advanced features (caching, clustering, events, A/B experiments, multi-tenancy, explainability, simulation, policy store, etc.). It is the right place if you know the basic flow (Observe → Infer → Decide) and need production or extension features.

This page covers advanced features added in recent versions: similarity engines, fluent APIs, caching, testing utilities, and more.


Packages and features overview

Extended packages (beyond Core, Runtime, AI, and providers) and where they are documented on this page:

| Package | What it is | What it does | Section |
|---|---|---|---|
| Intentum.AI.Caching.Redis | Redis-based embedding cache | IEmbeddingCache for multi-node production; store embeddings in Redis | Redis (Distributed) Cache |
| Intentum.Clustering | Intent clustering | Groups intent history for pattern detection; IIntentClusterer, AddIntentClustering | Intent Clustering |
| Intentum.Events | Webhook / event system | Dispatches intent events (IntentInferred, PolicyDecisionChanged) via HTTP POST; IIntentEventHandler, WebhookIntentEventHandler | Webhook / Event System |
| Intentum.Experiments | A/B testing | Traffic split across model/policy variants; IntentExperiment, ExperimentResult, AddVariant | A/B Experiments |
| Intentum.MultiTenancy | Multi-tenancy | Tenant-scoped behavior space repository; ITenantProvider, TenantAwareBehaviorSpaceRepository | Multi-tenancy |
| Intentum.Explainability | Intent explainability | Signal contribution scores, human-readable summary; IIntentExplainer, IntentExplainer | Intent Explainability |
| Intentum.Simulation | Intent simulation | Synthetic behavior spaces for testing; IBehaviorSpaceSimulator, BehaviorSpaceSimulator | Intent Simulation |
| Intentum.Versioning | Policy versioning | Policy/model version tracking for rollback; IVersionedPolicy, PolicyVersionTracker | Policy Versioning |
| Intentum.Runtime.PolicyStore | Declarative policy store | Load policies from JSON/file with hot-reload; IPolicyStore, FilePolicyStore, SafeConditionBuilder | Policy Store |
| Intentum.Explainability (extended) | Intent decision tree | Explain policy path as a tree; IIntentTreeExplainer, IntentTreeExplainer | Intent Tree |
| Intentum.Analytics (extended) | Intent timeline, pattern detector | Entity timeline, behavior patterns, anomalies; GetIntentTimelineAsync, IBehaviorPatternDetector | Intent Timeline, Behavior Pattern Detector |
| Intentum.Simulation (extended) | Scenario runner | Run defined scenarios through model + policy; IScenarioRunner, IntentScenarioRunner | Scenario Runner |
| Intentum.Core (extended) | Multi-stage model, context-aware policy | Chain models with thresholds; policy with context; MultiStageIntentModel, ContextAwarePolicyEngine | Multi-Stage Intent, Context-Aware Policy |
| Intentum.Core.Streaming | Real-time intent stream | Consume behavior event batches; IBehaviorStreamConsumer, MemoryBehaviorStreamConsumer | Stream Processing |
| Intentum.Observability (extended) | OpenTelemetry tracing | Spans for infer and policy.evaluate; IntentumActivitySource | Observability |

Templates: dotnet new intentum-webapi, intentum-backgroundservice, intentum-function — see Setup – Create from template. Sample Web also exposes Playground (compare models): POST /api/intent/playground/compare, and Intent Tree: POST /api/intent/explain-tree.

Core packages (Intentum.Core, Intentum.Runtime, Intentum.AI, providers, Testing, AspNetCore, Persistence, Analytics) are listed in Architecture and README.


Similarity Engines

Intentum provides multiple similarity engines for combining embeddings into intent scores.

Source weights (dimension counts)

When the similarity engine supports it, LlmIntentModel passes dimension counts (event counts per actor:action) as weights, so "user:login.failed" occurring 5 times carries more weight than a single occurrence. Engines that implement the CalculateIntentScore(embeddings, sourceWeights) overload use these weights; others (e.g. the simple average) ignore them.
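
The idea behind source weights can be sketched as a weighted average: each embedding score is multiplied by its event count before averaging. The snippet below is a standalone illustration of that arithmetic, not the library's actual engine code:

```csharp
using System;
using System.Collections.Generic;

static class WeightedScore
{
    // Weighted average: scores keyed by "actor:action", weights are event counts.
    // Sketches the intent behind CalculateIntentScore(embeddings, sourceWeights);
    // the real engine may differ in detail.
    public static double Combine(IReadOnlyDictionary<string, double> scores,
                                 IReadOnlyDictionary<string, int> counts)
    {
        double weightedSum = 0, totalWeight = 0;
        foreach (var (key, score) in scores)
        {
            var w = counts.TryGetValue(key, out var c) ? c : 1; // missing count => weight 1
            weightedSum += score * w;
            totalWeight += w;
        }
        return totalWeight == 0 ? 0 : weightedSum / totalWeight;
    }
}
```

Here "user:login.failed" with count 5 pulls the combined score toward its own score five times as strongly as a single occurrence would.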

SimpleAverageSimilarityEngine (Default)

The default engine averages embedding scores; when sourceWeights (e.g. from BehaviorVector.Dimensions) are provided, it uses a weighted average.

var engine = new SimpleAverageSimilarityEngine();

WeightedAverageSimilarityEngine

Applies custom weights to embeddings based on their source (actor:action). Useful when certain behaviors should have more influence.

var weights = new Dictionary<string, double>
{
    { "user:login", 2.0 },      // Login is twice as important
    { "user:submit", 1.5 },     // Submit is 1.5x important
    { "user:retry", 0.5 }        // Retry is less important
};
var engine = new WeightedAverageSimilarityEngine(weights, defaultWeight: 1.0);

TimeDecaySimilarityEngine

Applies time-based decay to embeddings. More recent events have higher influence on intent inference.

When used with LlmIntentModel, time decay is applied automatically: the model detects ITimeAwareSimilarityEngine and calls CalculateIntentScoreWithTimeDecay(behaviorSpace, embeddings) so you do not need to wire it manually.

var engine = new TimeDecaySimilarityEngine(
    halfLife: TimeSpan.FromHours(1),
    referenceTime: DateTimeOffset.UtcNow);

var intentModel = new LlmIntentModel(embeddingProvider, engine);
var intent = intentModel.Infer(space); // time decay applied automatically

For direct use (e.g. custom pipeline):

var score = engine.CalculateIntentScoreWithTimeDecay(behaviorSpace, embeddings);
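
A half-life decay of this kind is usually exponential: an event exactly one half-life old gets weight 0.5, two half-lives old gets 0.25, and so on. The formula below is the standard one and is shown for intuition; the engine's exact curve is an assumption here:

```csharp
using System;

static class TimeDecay
{
    // Exponential half-life decay: weight = 0.5^(age / halfLife).
    // Sketch of the usual formula, not necessarily the engine's exact implementation.
    public static double Weight(TimeSpan age, TimeSpan halfLife)
        => Math.Pow(0.5, age.TotalSeconds / halfLife.TotalSeconds);
}
```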

CosineSimilarityEngine

Uses cosine similarity between embedding vectors. Calculates the angle between vectors to measure similarity.

var engine = new CosineSimilarityEngine();

// Automatically uses vectors if available, falls back to simple average if not
var score = engine.CalculateIntentScore(embeddings);

Note: Requires embeddings with vector data. MockEmbeddingProvider automatically generates vectors for testing.
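
Cosine similarity itself is the dot product of two vectors divided by the product of their magnitudes. A minimal standalone version of that math (for intuition; not the engine's source):

```csharp
using System;

static class Cosine
{
    // cos(theta) = (a . b) / (|a| * |b|); 1.0 = same direction, 0.0 = orthogonal.
    public static double Similarity(double[] a, double[] b)
    {
        if (a.Length != b.Length) throw new ArgumentException("vector lengths differ");
        double dot = 0, magA = 0, magB = 0;
        for (int i = 0; i < a.Length; i++)
        {
            dot += a[i] * b[i];
            magA += a[i] * a[i];
            magB += b[i] * b[i];
        }
        return magA == 0 || magB == 0 ? 0 : dot / (Math.Sqrt(magA) * Math.Sqrt(magB));
    }
}
```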

CompositeSimilarityEngine

Combines multiple similarity engines using weighted combination. Useful for A/B testing or combining different strategies.

var engine1 = new SimpleAverageSimilarityEngine();
var engine2 = new WeightedAverageSimilarityEngine(weights);
var engine3 = new CosineSimilarityEngine();

// Equal weights
var composite = new CompositeSimilarityEngine(new[] { engine1, engine2, engine3 });

// Custom weights
var compositeWeighted = new CompositeSimilarityEngine(new[]
{
    (engine1, 1.0),
    (engine2, 2.0),
    (engine3, 1.5)
});

Behavior vector normalization

You can normalize behavior vectors so repeated events do not dominate (e.g. cap per dimension, L1 norm, or soft cap).

ToVectorOptions takes a VectorNormalization mode (None, Cap, L1, SoftCap) and an optional CapPerDimension.

// Raw (default): actor:action → count
var raw = space.ToVector();

// Cap each dimension at 3
var capped = space.ToVector(new ToVectorOptions(VectorNormalization.Cap, CapPerDimension: 3));

// L1 norm: scale so sum of dimension values = 1
var l1 = space.ToVector(new ToVectorOptions(VectorNormalization.L1));

// SoftCap: value / cap, min 1
var soft = space.ToVector(new ToVectorOptions(VectorNormalization.SoftCap, CapPerDimension: 3));

Time-windowed vector with normalization:

var windowed = space.ToVector(start, end, new ToVectorOptions(VectorNormalization.L1));

See examples/vector-normalization for a runnable example.
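
The two unambiguous modes can be sketched directly on a count dictionary: Cap clamps each dimension at the cap, and L1 scales all values to sum to 1 (SoftCap's exact formula isn't spelled out above, so it is omitted here):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

static class VectorNorm
{
    // Cap: clamp each dimension's count at 'cap'.
    public static Dictionary<string, double> Cap(Dictionary<string, double> dims, double cap)
        => dims.ToDictionary(kv => kv.Key, kv => Math.Min(kv.Value, cap));

    // L1: scale so all dimension values sum to 1.
    public static Dictionary<string, double> L1(Dictionary<string, double> dims)
    {
        var sum = dims.Values.Sum();
        return sum == 0 ? dims : dims.ToDictionary(kv => kv.Key, kv => kv.Value / sum);
    }
}
```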

LlmIntentModel with ToVectorOptions: Use the extension model.Infer(space, toVectorOptions) (e.g. new ToVectorOptions(CapPerDimension: 20)) to build the vector with a cap and infer in one call — fewer dimensions mean fewer embedding calls and less memory. See Benchmarks — Improvement opportunities.


Rule-based and chained intent models

RuleBasedIntentModel — Infers intent from rules only (no LLM). Fast, deterministic, explainable. First matching rule wins; each rule returns a RuleMatch (name, score, optional reasoning).

ChainedIntentModel — Tries a primary model (e.g. RuleBasedIntentModel) first; if confidence is below a threshold, falls back to a secondary model (e.g. LlmIntentModel). Reduces cost and latency by using the cheap path when confidence is high.

var rules = new List<Func<BehaviorSpace, RuleMatch?>>
{
    space =>
    {
        var loginFails = space.Events.Count(e => e.Action == "login.failed");
        var hasReset = space.Events.Any(e => e.Action == "password.reset");
        if (loginFails >= 2 && hasReset)
            return new RuleMatch("AccountRecovery", 0.85, "login.failed>=2 and password.reset");
        return null;
    }
};

var primary = new RuleBasedIntentModel(rules);
var fallback = new LlmIntentModel(embeddingProvider, new SimpleAverageSimilarityEngine());
var chained = new ChainedIntentModel(primary, fallback, confidenceThreshold: 0.7);

var intent = chained.Infer(space);
// intent.Reasoning: "Primary: login.failed>=2 and password.reset" or "Fallback: LLM (primary confidence below 0.7)"

See examples/chained-intent for a runnable example.


Fluent API

BehaviorSpaceBuilder

Create behavior spaces with a more readable fluent API.

var space = new BehaviorSpaceBuilder()
    .WithActor("user")
        .Action("login")
        .Action("retry")
        .Action("submit")
    .WithActor("system")
        .Action("validate")
    .Build();

With timestamps and metadata:

var space = new BehaviorSpaceBuilder()
    .WithActor("user")
        .Action("login", DateTimeOffset.UtcNow)
        .Action("submit", DateTimeOffset.UtcNow, new Dictionary<string, object> { { "sessionId", "abc123" } })
    .Build();

IntentPolicyBuilder

Create policies with a fluent API.

var policy = new IntentPolicyBuilder()
    .Block("ExcessiveRetry", i => i.Signals.Count(s => s.Description.Contains("retry")) >= 3)
    .Escalate("VeryLowConfidence", i => i.Confidence.Score < 0.3)
    .RequireAuth("SensitiveAction", i => i.Signals.Any(s => s.Description.Contains("sensitive")))
    .RateLimit("HighFrequency", i => i.Signals.Count > 10)
    .Allow("HighConfidence", i => i.Confidence.Level is "High" or "Certain")
    .Observe("MediumConfidence", i => i.Confidence.Level == "Medium")
    .Warn("LowConfidence", i => i.Confidence.Level == "Low")
    .Build();

Rules are evaluated in order and the first matching rule wins, so give each rule a distinct, reachable condition.

Policy Decision Types

Intentum supports multiple policy decision types:

  • Allow — Allow the action to proceed
  • Observe — Observe the action but allow it to proceed
  • Warn — Warn about the action but allow it to proceed
  • Block — Block the action
  • Escalate — Escalate to a higher level for review
  • RequireAuth — Require additional authentication before proceeding
  • RateLimit — Apply rate limiting to the action

All decision types support localization:

var localizer = new DefaultLocalizer("tr");
var text = PolicyDecision.Escalate.ToLocalizedString(localizer); // "Yükselt"
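
In a web API, each decision type typically maps to an HTTP response. The sketch below uses a local enum mirroring Intentum's PolicyDecision names so it is self-contained, and the status-code mapping is one plausible convention, not something the library prescribes:

```csharp
using System;

// Local stand-in mirroring Intentum's PolicyDecision names, for a self-contained sketch.
enum Decision { Allow, Observe, Warn, Block, Escalate, RequireAuth, RateLimit }

static class DecisionHttp
{
    // One plausible mapping; adjust to your API's conventions.
    public static int ToStatusCode(Decision d) => d switch
    {
        Decision.Allow or Decision.Observe or Decision.Warn => 200, // proceed; log/warn out of band
        Decision.RequireAuth => 401,                                // ask for step-up authentication
        Decision.Block or Decision.Escalate => 403,                 // deny; escalation handled server-side
        Decision.RateLimit => 429,                                  // too many requests
        _ => throw new ArgumentOutOfRangeException(nameof(d))
    };
}
```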

Policy composition

Combine or extend policies without duplicating rules.

Inheritance (WithBase)

Evaluate a derived policy after a base policy: base rules first, then derived. First matching rule wins.

var basePolicy = new IntentPolicyBuilder()
    .Block("BaseBlock", i => i.Confidence.Level == "Low")
    .Build();
var derived = new IntentPolicyBuilder()
    .Allow("DerivedAllow", i => i.Confidence.Level == "High")
    .Build();
var composed = derived.WithBase(basePolicy);
var decision = intent.Decide(composed);

Merge (multiple policies)

Combine several policies into one; rules from the first policy are evaluated first, then the second, etc.

var merged = IntentPolicy.Merge(policyA, policyB, policyC);
var decision = intent.Decide(merged);

A/B policy variants (PolicyVariantSet)

Use different policies per intent (e.g. by experiment or confidence). The selector returns which variant name to use.

var variants = new PolicyVariantSet(
    new Dictionary<string, IntentPolicy> { ["control"] = controlPolicy, ["treatment"] = treatmentPolicy },
    intent => intent.Confidence.Score > 0.8 ? "treatment" : "control");
var decision = intent.Decide(variants);

Rate Limiting

When policy returns RateLimit, use IRateLimiter to enforce limits. MemoryRateLimiter provides in-memory fixed-window limiting (single-node or development).

Basic usage

var rateLimiter = new MemoryRateLimiter();

// After intent.Decide(policy) returns RateLimit:
var result = await rateLimiter.TryAcquireAsync(
    key: "user-123",
    limit: 100,
    window: TimeSpan.FromMinutes(1));

if (!result.Allowed)
{
    // Return 429 with Retry-After: result.RetryAfter
    return Results.Json(new { error = "Rate limit exceeded" }, statusCode: 429);
}

With policy (DecideWithRateLimitAsync)

var options = new RateLimitOptions("user-123", 100, TimeSpan.FromMinutes(1));
var (decision, rateLimitResult) = await intent.DecideWithRateLimitAsync(
    policy,
    rateLimiter,
    options);

if (decision == PolicyDecision.RateLimit && rateLimitResult is { Allowed: false })
{
    // Enforce: return 429, set Retry-After header from rateLimitResult.RetryAfter
}

Reset

rateLimiter.Reset("user-123"); // Clear counter (e.g. after admin override)

Note: MemoryRateLimiter is per-process. For multi-node apps, use a distributed rate limiter implementing IRateLimiter.
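
A fixed-window counter of the kind MemoryRateLimiter provides can be sketched in a few lines (a simplification for intuition, not the library source):

```csharp
using System;
using System.Collections.Generic;

sealed class FixedWindowLimiter
{
    private readonly Dictionary<string, (DateTimeOffset WindowStart, int Count)> _state = new();

    // Returns true while 'key' has made fewer than 'limit' calls in the current window.
    public bool TryAcquire(string key, int limit, TimeSpan window, DateTimeOffset now)
    {
        if (!_state.TryGetValue(key, out var s) || now - s.WindowStart >= window)
            s = (now, 0); // window expired (or first call): start a fresh window
        if (s.Count >= limit) return false;
        _state[key] = (s.WindowStart, s.Count + 1);
        return true;
    }

    public void Reset(string key) => _state.Remove(key);
}
```

Because the counters live in an in-process dictionary, each node counts independently; that is exactly why a shared (e.g. Redis-backed) IRateLimiter is needed for multi-node enforcement.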


Embedding Caching

Cache embedding results to improve performance and reduce API calls.

Memory Cache

var memoryCache = new MemoryCache(new MemoryCacheOptions());
var cache = new MemoryEmbeddingCache(memoryCache, new MemoryCacheEntryOptions
{
    AbsoluteExpirationRelativeToNow = TimeSpan.FromHours(1)
});

var cachedProvider = new CachedEmbeddingProvider(
    new MockEmbeddingProvider(),
    cache);

var model = new LlmIntentModel(cachedProvider, new SimpleAverageSimilarityEngine());

Redis (Distributed) Cache

What it is: Intentum.AI.Caching.Redis implements IEmbeddingCache using Redis (via IDistributedCache). Embedding results (behavior key → vector/score) are stored in Redis so that multiple app instances share the same cache.

What it's for: Use it when you run more than one node (e.g. multiple web servers or workers). In-memory cache (MemoryEmbeddingCache) is per-process; Redis cache is shared, so you avoid duplicate embedding API calls and reduce cost and latency. Typical use: production with OpenAI/Gemini/Mistral where the same behavior keys are requested from different instances.

How to use it:

  1. Add the package: Intentum.AI.Caching.Redis.
  2. Ensure a Redis server is available (local or managed, e.g. Azure Cache for Redis).
  3. Register the cache and wrap your embedding provider with CachedEmbeddingProvider:
builder.Services.AddIntentumRedisCache(options =>
{
    options.ConnectionString = "localhost:6379";  // or your Redis connection string
    options.InstanceName = "Intentum:";            // key prefix (default)
    options.DefaultExpiration = TimeSpan.FromHours(24);
});
builder.Services.AddSingleton<IIntentEmbeddingProvider>(sp =>
{
    var provider = new OpenAIEmbeddingProvider(/* ... */);  // or any IIntentEmbeddingProvider
    var cache = sp.GetRequiredService<IEmbeddingCache>();
    return new CachedEmbeddingProvider(provider, cache);
});
  4. Inject IIntentEmbeddingProvider and use it with LlmIntentModel as usual. Cache hits are served from Redis; misses call the underlying provider and then store the result.

Options: ConnectionString, InstanceName (key prefix), DefaultExpiration (TTL for cached embeddings).
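
The get-or-compute ("cache-aside") pattern behind CachedEmbeddingProvider can be sketched generically; the delegate-based shape below is illustrative, with a plain dictionary standing in for the Redis or memory backend:

```csharp
using System;
using System.Collections.Generic;
using System.Threading.Tasks;

sealed class CacheAside<TKey, TValue> where TKey : notnull
{
    private readonly Dictionary<TKey, TValue> _cache = new(); // stand-in for Redis/memory
    private readonly Func<TKey, Task<TValue>> _compute;       // the "real" provider call

    public CacheAside(Func<TKey, Task<TValue>> compute) => _compute = compute;

    public async Task<TValue> GetAsync(TKey key)
    {
        if (_cache.TryGetValue(key, out var hit)) return hit; // hit: no provider call
        var value = await _compute(key);                      // miss: compute once...
        _cache[key] = value;                                  // ...then store for next time
        return value;
    }
}
```

With a shared Redis backend instead of the local dictionary, every node sees the same stored entries, which is what eliminates duplicate embedding API calls across instances.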


Behavior Space Metadata and Time Windows

Metadata

Associate metadata with behavior spaces:

var space = new BehaviorSpace();
space.SetMetadata("sector", "ESG");
space.SetMetadata("sessionId", "abc123");
space.SetMetadata("userId", "user456");

var sector = space.GetMetadata<string>("sector");

Time Windows

Analyze events within specific time windows:

// Get events in the last hour
var recentEvents = space.GetEventsInWindow(TimeSpan.FromHours(1));

// Get events in a specific range
var eventsInRange = space.GetEventsInWindow(startTime, endTime);

// Get time span of all events
var span = space.GetTimeSpan();

// Build vector from time window
var vector = space.ToVector(startTime, endTime);

Testing Utilities

The Intentum.Testing package provides helpers for writing tests.

Test Helpers

var model = TestHelpers.CreateDefaultModel();
var policy = TestHelpers.CreateDefaultPolicy();
var space = TestHelpers.CreateSimpleSpace();
var spaceWithRetries = TestHelpers.CreateSpaceWithRetries(3);

Assertions

// BehaviorSpace assertions
BehaviorSpaceAssertions.ContainsEvent(space, "user", "login");
BehaviorSpaceAssertions.HasEventCount(space, 5);
BehaviorSpaceAssertions.ContainsActor(space, "user");

// Intent assertions
IntentAssertions.HasConfidenceLevel(intent, "High");
IntentAssertions.HasConfidenceScore(intent, 0.7, 1.0);
IntentAssertions.HasSignals(intent);
IntentAssertions.ContainsSignal(intent, "retry");

// Policy decision assertions
PolicyDecisionAssertions.IsOneOf(decision, PolicyDecision.Allow, PolicyDecision.Observe);
PolicyDecisionAssertions.IsAllow(decision);
PolicyDecisionAssertions.IsNotBlock(decision);

Intent Analytics & Reporting

The Intentum.Analytics package provides analytics and reporting over intent history (requires IIntentHistoryRepository).

Setup

// After adding Intentum.Persistence (e.g. EF Core) and Intentum.Analytics
builder.Services.AddIntentumPersistence(options => options.UseSqlServer(connectionString));
builder.Services.AddIntentAnalytics();
var analytics = serviceProvider.GetRequiredService<IIntentAnalytics>();

var trends = await analytics.GetConfidenceTrendsAsync(
    start: DateTimeOffset.UtcNow.AddDays(-30),
    end: DateTimeOffset.UtcNow,
    bucketSize: TimeSpan.FromDays(1));

foreach (var point in trends)
    Console.WriteLine($"{point.BucketStart:yyyy-MM-dd} {point.ConfidenceLevel}: {point.Count} (avg score {point.AverageScore:F2})");

Decision distribution

var report = await analytics.GetDecisionDistributionAsync(start, end);
Console.WriteLine($"Total: {report.TotalCount}");
foreach (var (decision, count) in report.CountByDecision)
    Console.WriteLine($"  {decision}: {count}");

Anomaly detection

var anomalies = await analytics.DetectAnomaliesAsync(start, end, TimeSpan.FromHours(1));
foreach (var a in anomalies)
    Console.WriteLine($"{a.Type}: {a.Description} (severity {a.Severity:F2})");

Dashboard summary

var summary = await analytics.GetSummaryAsync(start, end, TimeSpan.FromDays(1));
// summary.TotalInferences, summary.UniqueBehaviorSpaces, summary.ConfidenceTrend,
// summary.DecisionDistribution, summary.Anomalies

Export to JSON / CSV

var json = await analytics.ExportToJsonAsync(start, end);
var csv = await analytics.ExportToCsvAsync(start, end);

ASP.NET Core Middleware

Automatically observe HTTP request behaviors.

Setup

// Program.cs
builder.Services.AddIntentum();
// or with custom BehaviorSpace
builder.Services.AddIntentum(customBehaviorSpace);

app.UseIntentumBehaviorObservation(new BehaviorObservationOptions
{
    Enabled = true,
    IncludeHeaders = false,
    GetActor = ctx => "http",
    GetAction = ctx => $"{ctx.Request.Method.ToLowerInvariant()}_{ctx.Request.Path.Value?.Replace("/", "_")}"
});

Observability

The Intentum.Observability package provides OpenTelemetry metrics.

Setup

var model = new ObservableIntentModel(
    new LlmIntentModel(embeddingProvider, similarityEngine));

// Metrics are automatically recorded
var intent = model.Infer(space);

// Or use extension method for policy decisions
var decision = intent.DecideWithMetrics(policy);

Metrics

  • intentum.intent.inference.count — Number of intent inferences
  • intentum.intent.inference.duration — Duration of inference operations (ms)
  • intentum.intent.confidence.score — Confidence scores
  • intentum.policy.decision.count — Number of policy decisions
  • intentum.behavior.space.size — Size of behavior spaces

Batch Processing

Process multiple behavior spaces efficiently in batch.

BatchIntentModel

var model = new LlmIntentModel(embeddingProvider, similarityEngine);
var batchModel = new BatchIntentModel(model);

// Synchronous batch processing
var spaces = new[] { space1, space2, space3 };
var intents = batchModel.InferBatch(spaces);

// Async batch processing with cancellation support
var intentsAsync = await batchModel.InferBatchAsync(spaces, cancellationToken);

Streaming: InferMany / InferManyAsync

For lazy or async streaming over many behavior spaces, use the Intentum.AI extension methods (no batch list in memory):

using Intentum.AI;

// Lazy sync stream: yields one intent per space as you enumerate
foreach (var intent in model.InferMany(spaces))
    Process(intent);

// Async stream: yields intents as spaces are enumerated (e.g. from DB or queue)
await foreach (var intent in model.InferManyAsync(SpacesFromDbAsync(), cancellationToken))
    await ProcessAsync(intent);

Behavior vector caching and pre-computed vector

  • ToVector() cache: BehaviorSpace.ToVector() is computed once and cached until you call Observe() again, so repeated inference on the same space reuses the vector.
  • Pre-computed vector in Infer: When you already have a BehaviorVector (e.g. from persistence or a snapshot), pass it to avoid recomputation: model.Infer(space, precomputedVector).

Persistence

Store behavior spaces and intent history for analytics and auditing. Implementations: Entity Framework Core, Redis, MongoDB.

Entity Framework Core

// Setup
builder.Services.AddIntentumPersistence(options =>
    options.UseSqlServer(connectionString));

// Or use in-memory for testing
builder.Services.AddIntentumPersistenceInMemory("TestDb");

// Usage
var repository = serviceProvider.GetRequiredService<IBehaviorSpaceRepository>();
var id = await repository.SaveAsync(behaviorSpace);
var retrieved = await repository.GetByIdAsync(id);

// Query by metadata
var spaces = await repository.GetByMetadataAsync("sector", "ESG");

// Query by time window
var recentSpaces = await repository.GetByTimeWindowAsync(
    DateTimeOffset.UtcNow.AddHours(-24),
    DateTimeOffset.UtcNow);

Intent History

var historyRepository = serviceProvider.GetRequiredService<IIntentHistoryRepository>();

// Save intent result
await historyRepository.SaveAsync(behaviorSpaceId, intent, decision);

// Query history
var history = await historyRepository.GetByBehaviorSpaceIdAsync(behaviorSpaceId);
var highConfidence = await historyRepository.GetByConfidenceLevelAsync("High");
var blocked = await historyRepository.GetByDecisionAsync(PolicyDecision.Block);

Redis

Add Intentum.Persistence.Redis and register with a Redis connection:

using Intentum.Persistence.Redis;
using StackExchange.Redis;

var redis = ConnectionMultiplexer.Connect("localhost");
builder.Services.AddIntentumPersistenceRedis(redis, keyPrefix: "intentum:");

MongoDB

Add Intentum.Persistence.MongoDB and register with an IMongoDatabase:

using Intentum.Persistence.MongoDB;
using MongoDB.Driver;

var client = new MongoClient(connectionString);
var database = client.GetDatabase("intentum");
builder.Services.AddIntentumPersistenceMongoDB(database,
    behaviorSpaceCollectionName: "behaviorspaces",
    intentHistoryCollectionName: "intenthistory");

Webhook / Event System

What it is: Intentum.Events lets you send intent-related events (e.g. after inference or policy decision) to external systems via HTTP POST. It provides IIntentEventHandler and a built-in WebhookIntentEventHandler that posts a JSON payload to one or more webhook URLs with configurable retry.

What it's for: Use it when you need to notify another service when an intent is inferred or a decision is made (e.g. analytics, audit, downstream workflows). Typical use: after intent = model.Infer(space) and decision = intent.Decide(policy), call HandleAsync(payload, IntentumEventType.IntentInferred) so your webhook endpoint receives the intent name, confidence, decision, and timestamp.

How to use it:

  1. Add the package: Intentum.Events.
  2. Register events and webhooks:
builder.Services.AddIntentumEvents(options =>
{
    options.AddWebhook("https://api.example.com/webhooks/intent", events: new[] { "IntentInferred", "PolicyDecisionChanged" });
    options.RetryCount = 3;  // retries on HTTP failure (exponential backoff)
});
  3. After inference, build a payload and call the handler:
var intent = model.Infer(space);
var decision = intent.Decide(policy);
var payload = new IntentEventPayload(behaviorSpaceId: "id", intent, decision, DateTimeOffset.UtcNow);
await eventHandler.HandleAsync(payload, IntentumEventType.IntentInferred, cancellationToken);

The webhook receives a POST with a JSON body containing BehaviorSpaceId, IntentName, ConfidenceLevel, ConfidenceScore, Decision, RecordedAt, EventType. Failed POSTs are retried according to RetryCount.
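
Retry on HTTP failure is commonly implemented as exponential backoff, where the delay doubles per attempt. The exact delays Intentum.Events uses are not specified above, so treat this as an illustrative sketch:

```csharp
using System;
using System.Threading.Tasks;

static class Retry
{
    // Calls 'send' once, then retries up to 'retryCount' more times on failure,
    // doubling the delay each attempt (baseDelay, 2x, 4x, ...).
    public static async Task<bool> PostWithRetryAsync(
        Func<Task<bool>> send, int retryCount, TimeSpan baseDelay)
    {
        for (int attempt = 0; attempt <= retryCount; attempt++)
        {
            if (await send()) return true;
            if (attempt < retryCount)
                await Task.Delay(baseDelay * Math.Pow(2, attempt)); // backoff before retrying
        }
        return false; // exhausted all attempts
    }
}
```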


Intent Clustering

What it is: Intentum.Clustering groups intent history records for analysis. It provides IIntentClusterer with two strategies: ClusterByPatternAsync (group by confidence level + policy decision) and ClusterByConfidenceScoreAsync (split into k score buckets). Each cluster has an id, label, record ids, count, and a summary (average/min/max score).

What it's for: Use it when you store intent history (e.g. via IIntentHistoryRepository) and want to see patterns: how many intents ended in Allow vs Block, or how confidence is distributed (low/medium/high bands). Typical use: analytics dashboards, anomaly detection, or tuning policy thresholds.

How to use it:

  1. Add the package: Intentum.Clustering. You need Intentum.Persistence (and a repository that implements IIntentHistoryRepository) so you have IntentHistoryRecord data.
  2. Register the clusterer:
builder.Services.AddIntentClustering();
  3. Resolve IIntentClusterer and your IIntentHistoryRepository. Fetch records (e.g. by time window), then cluster:
var clusterer = serviceProvider.GetRequiredService<IIntentClusterer>();
var historyRepo = serviceProvider.GetRequiredService<IIntentHistoryRepository>();
var records = await historyRepo.GetByTimeWindowAsync(start, end);

// Group by (ConfidenceLevel, Decision) — e.g. "High / Allow", "Medium / Observe"
var patternClusters = await clusterer.ClusterByPatternAsync(records);
foreach (var c in patternClusters)
    Console.WriteLine($"{c.Label}: {c.Count} intents (avg score {c.Summary?.AverageConfidenceScore:F2})");

// Split into k buckets by confidence score (e.g. low / medium / high)
var scoreClusters = await clusterer.ClusterByConfidenceScoreAsync(records, k: 3);
foreach (var c in scoreClusters)
    Console.WriteLine($"{c.Label}: {c.Count} intents");

Intent Explainability

What it is: Intentum.Explainability explains how an intent was inferred: which signals (behaviors) contributed how much to the final confidence. It provides IIntentExplainer and IntentExplainer, which compute signal contribution (each signal’s weight as a percentage of the total) and a human-readable explanation string.

What it's for: Use it when you need to show users or auditors why a given intent/confidence was returned (e.g. “login and submit contributed 60% and 40%”). Typical use: debug UI, compliance, or support tools.

How to use it:

  1. Add the package: Intentum.Explainability.
  2. Create an explainer (or register it in DI) and call it after inference:
var explainer = new IntentExplainer();

// Per-signal contribution (source, description, weight, percentage)
var contributions = explainer.GetSignalContributions(intent);
foreach (var c in contributions)
    Console.WriteLine($"{c.Description}: {c.ContributionPercent:F0}%");

// Single summary sentence
var text = explainer.GetExplanation(intent, maxSignals: 5);
// e.g. "Intent \"AI-Inferred-Intent\" inferred with confidence High (0.85). Top contributors: user:login (45%); user:submit (35%); ..."

No extra configuration; it works from the Intent and its Signals and Confidence.
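
The contribution percentage is simply each signal's weight divided by the total weight. A standalone sketch of that arithmetic (the dictionary shape is illustrative, not the library's SignalContribution type):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

static class Contribution
{
    // Each signal's share of the total weight, as a percentage of 100.
    public static Dictionary<string, double> Percentages(
        IReadOnlyDictionary<string, double> signalWeights)
    {
        var total = signalWeights.Values.Sum();
        return signalWeights.ToDictionary(
            kv => kv.Key,
            kv => total == 0 ? 0 : kv.Value * 100.0 / total);
    }
}
```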


Intent Simulation

What it is: Intentum.Simulation generates synthetic behavior spaces for testing and demos. It provides IBehaviorSpaceSimulator and BehaviorSpaceSimulator with two methods: FromSequence (build a space from a fixed list of actor/action pairs) and GenerateRandom (build a space with random events from given actors and actions, with an optional seed for reproducibility).

What it's for: Use it when you need many behavior spaces in tests without hand-writing each BehaviorSpace (e.g. load tests, property-based tests, or demos). Typical use: unit tests that feed spaces into model.Infer(space) or experiment.RunAsync(spaces).

How to use it:

  1. Add the package: Intentum.Simulation.
  2. Create a simulator and generate spaces:
var simulator = new BehaviorSpaceSimulator();

// Fixed sequence — events get timestamps 1 second apart (or pass baseTime)
var space = simulator.FromSequence(new[] { ("user", "login"), ("user", "submit") });

// Random space — useful for stress tests or demos; use randomSeed for reproducible tests
var randomSpace = simulator.GenerateRandom(
    actors: new[] { "user", "system" },
    actions: new[] { "login", "submit", "retry", "cancel" },
    eventCount: 10,
    randomSeed: 42);
  3. Use the returned BehaviorSpace with your IIntentModel, policy, or IntentExperiment as usual.

A/B Experiments

What it is: Intentum.Experiments runs A/B tests over intent inference: you define multiple variants (each is a model + policy pair), set a traffic split (e.g. 50% control, 50% test), and run a list of behavior spaces through the experiment. Each space is assigned to one variant by the split; you get back one ExperimentResult per space (variant name, intent, decision).

What it's for: Use it when you want to compare two (or more) models or policies on the same traffic (e.g. “new policy vs current”). Typical use: rolling out a new policy or model and measuring Allow/Block/Observe distribution per variant.

How to use it:

  1. Add the package: Intentum.Experiments.
  2. Build an experiment with at least two variants and a traffic split (percentages must sum to 100):
var experiment = new IntentExperiment()
    .AddVariant("control", controlModel, controlPolicy)
    .AddVariant("test", testModel, testPolicy)
    .SplitTraffic(50, 50);  // 50% control, 50% test; if omitted, split is even
  3. Run the experiment with a list of behavior spaces (e.g. from production sampling or simulation):
var results = await experiment.RunAsync(behaviorSpaces, cancellationToken);
foreach (var r in results)
    Console.WriteLine($"{r.VariantName}: {r.Intent.Confidence.Level} → {r.Decision}");

You can then aggregate by VariantName to compare metrics (e.g. Block rate, average confidence) between control and test.
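
A percentage split can be implemented by bucketing a stable hash of each space's id into 0-99 and walking the cumulative percentages. The sketch below illustrates the idea; IntentExperiment's actual assignment logic may differ:

```csharp
using System;

static class TrafficSplit
{
    // Maps an id to a variant index using cumulative percentages (must sum to 100).
    // A stable hash keeps the same id in the same variant across runs and processes.
    public static int Assign(string id, int[] percentages)
    {
        int bucket = Math.Abs(StableHash(id)) % 100; // 0..99
        int cumulative = 0;
        for (int i = 0; i < percentages.Length; i++)
        {
            cumulative += percentages[i];
            if (bucket < cumulative) return i;
        }
        throw new ArgumentException("percentages must sum to 100");
    }

    // string.GetHashCode is randomized per process, so use a deterministic hash.
    private static int StableHash(string s)
    {
        unchecked
        {
            int h = 17;
            foreach (var ch in s) h = h * 31 + ch;
            return h;
        }
    }
}
```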


Multi-tenancy

What it is: Intentum.MultiTenancy provides a tenant-scoped behavior space repository. TenantAwareBehaviorSpaceRepository wraps any IBehaviorSpaceRepository: on save it injects the current tenant id (from ITenantProvider) into the behavior space metadata; on read/delete it returns only data belonging to the current tenant.

What it's for: Use it when your app serves multiple tenants (e.g. organizations or customers) and you must isolate behavior spaces and intent history per tenant. Typical use: SaaS backends where each request has a tenant context (e.g. from HTTP header or claims).

How to use it:

  1. Add the package: Intentum.MultiTenancy.
  2. Implement ITenantProvider to return the current tenant id (e.g. from IHttpContextAccessor, claims, or ambient context). Register it and the tenant-aware repository:
builder.Services.AddSingleton<ITenantProvider, MyTenantProvider>();  // your implementation
builder.Services.AddTenantAwareBehaviorSpaceRepository();
  3. Register an inner IBehaviorSpaceRepository (e.g. EF or MongoDB) as usual. When you need tenant isolation, inject TenantAwareBehaviorSpaceRepository instead of IBehaviorSpaceRepository:
// In a request with tenant context (e.g. middleware sets tenant)
var repo = serviceProvider.GetRequiredService<TenantAwareBehaviorSpaceRepository>();
await repo.SaveAsync(space, cancellationToken);   // space gets metadata "TenantId" = current tenant
var list = await repo.GetByTimeWindowAsync(start, end, cancellationToken);  // only current tenant's spaces

Tenant id is stored in metadata under the key TenantId. If GetCurrentTenantId() returns null or empty, the wrapper does not filter (all data is visible).
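
A minimal ITenantProvider can carry the tenant id in an AsyncLocal that your middleware sets per request. The sketch below declares the interface locally so it is self-contained; Intentum's actual interface shape may differ slightly:

```csharp
using System;
using System.Threading;

// Local declaration matching the documented member, for a self-contained sketch.
interface ITenantProvider
{
    string GetCurrentTenantId();
}

// Flows the tenant id with the async call context; middleware sets it per request.
sealed class AmbientTenantProvider : ITenantProvider
{
    private static readonly AsyncLocal<string> _tenantId = new();

    public static void SetTenant(string tenantId) => _tenantId.Value = tenantId;

    public string GetCurrentTenantId() => _tenantId.Value;
}
```

In ASP.NET Core the SetTenant call would typically live in middleware that reads a tenant header or claim before the repository is used.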


Policy Versioning

What it is: Intentum.Versioning tracks policy versions so you can roll back or roll forward. It provides IVersionedPolicy (a policy plus a version string), VersionedPolicy (record implementation), and PolicyVersionTracker, which holds a list of versioned policies and a "current" index. You can add versions, switch current, and call Rollback() / Rollforward() to move the current index.

What it's for: Use it when you deploy policy changes and want to quickly revert to a previous version without redeploying (e.g. a new rule causes too many Blocks). Typical use: admin API or feature flag that switches the active policy version.

How to use it:

  1. Add the package: Intentum.Versioning.
  2. Wrap policies with VersionedPolicy and add them to a tracker (e.g. in DI as singleton):
var tracker = new PolicyVersionTracker();
tracker.Add(new VersionedPolicy("1.0", policyV1));
tracker.Add(new VersionedPolicy("2.0", policyV2));  // current is now 2.0
  3. Use the tracker's current policy when deciding:
var versioned = tracker.Current;
var policy = versioned?.Policy ?? fallbackPolicy;
var decision = intent.Decide(policy);
  4. Roll back or forward when needed:
if (tracker.Rollback())   // current moves to previous (e.g. 2.0 → 1.0)
    logger.LogInformation("Rolled back to {Version}", tracker.Current?.Version);
if (tracker.Rollforward())  // current moves to next (e.g. 1.0 → 2.0)
    logger.LogInformation("Rolled forward to {Version}", tracker.Current?.Version);

You can also SetCurrent(index) to jump to a specific version by index. Version strings are arbitrary (e.g. "1.0", "2024-01-15"); CompareVersions(a, b) is provided for ordering.


Intent Timeline

What it is: Intentum.Analytics extends intent history with an entity-scoped timeline: IIntentHistoryRepository.GetByEntityIdAsync(entityId, start, end) and IIntentAnalytics.GetIntentTimelineAsync(entityId, start, end) return time-ordered points (intent name, confidence, decision) for a given entity.

What it's for: When you attach an optional EntityId to intent history records, you can answer "how did this user/session's intent evolve over time?" — useful for dashboards, support tools, or auditing.

How to use: Persist intent history with EntityId (e.g. via your repository implementation). Register IIntentAnalytics with AddIntentAnalytics(). Call GetIntentTimelineAsync(entityId, start, end). Sample Web: GET /api/intent/analytics/timeline/{entityId}.
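A usage sketch, assuming the timeline point exposes the fields listed above (the exact property names may differ):

```csharp
// Fetch the last 24 hours of intent evolution for one entity.
var analytics = serviceProvider.GetRequiredService<IIntentAnalytics>();
var timeline = await analytics.GetIntentTimelineAsync(
    entityId, DateTimeOffset.UtcNow.AddDays(-1), DateTimeOffset.UtcNow);

foreach (var point in timeline)
    // Timestamp/IntentName/Confidence/Decision follow the description above; adjust as needed.
    Console.WriteLine($"{point.Timestamp:o} {point.IntentName} ({point.Confidence}) → {point.Decision}");
```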


Intent Tree

What it is: Intentum.Explainability provides IIntentTreeExplainer: given an inferred intent and a policy, it builds a decision tree (which rule matched, signal nodes, intent summary). Use ExplainTree(intent, policy, behaviorSpace?) to get IntentDecisionTree.

What it's for: Explain why a policy returned Allow/Block: show the path (rule name, condition, signals) in a tree form for UI or audit.

How to use: Add Intentum.Explainability, register with AddIntentTreeExplainer(). After inference and policy evaluation, call treeExplainer.ExplainTree(intent, policy, space). Sample Web: POST /api/intent/explain-tree (same body as infer).
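For example (a sketch; JSON serialization is just one way to surface the tree for a UI or audit log):

```csharp
using System.Text.Json;

// Build the decision tree after inference and policy evaluation.
var treeExplainer = serviceProvider.GetRequiredService<IIntentTreeExplainer>();
IntentDecisionTree tree = treeExplainer.ExplainTree(intent, policy, space);

// Serialize for a UI or audit trail.
Console.WriteLine(JsonSerializer.Serialize(tree, new JsonSerializerOptions { WriteIndented = true }));
```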


Context-Aware Policy Engine

What it is: ContextAwarePolicyEngine and ContextAwareIntentPolicy evaluate rules with a PolicyContext (intent, system load, region, recent intents, custom key-value). Rules are Func<Intent, PolicyContext, bool>.

What it's for: Decisions that depend on more than the current intent (e.g. "block if load > 0.8" or "escalate if same intent repeated 3 times").

How to use: Build a ContextAwareIntentPolicy with context-aware rules. Create ContextAwarePolicyEngine and call Evaluate(intent, context, policy). Extension: intent.Decide(context, policy) (RuntimeExtensions).
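A sketch of the load and repetition examples above. Rules are Func&lt;Intent, PolicyContext, bool&gt; as described, but the rule-registration method (AddRule here) and the PolicyContext property names are assumptions — check the actual API:

```csharp
// Hypothetical rule wiring — verify against ContextAwareIntentPolicy's real surface.
var policy = new ContextAwareIntentPolicy();
policy.AddRule("HighLoad", (intent, ctx) => ctx.SystemLoad > 0.8);           // block under load
policy.AddRule("RepeatedIntent", (intent, ctx) =>
    ctx.RecentIntents.Count(i => i.Name == intent.Name) >= 3);               // escalate on repetition

var engine = new ContextAwarePolicyEngine();
var decision = engine.Evaluate(intent, context, policy);
// Equivalent extension form (RuntimeExtensions):
// var decision = intent.Decide(context, policy);
```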


Policy Store

What it is: Intentum.Runtime.PolicyStore provides IPolicyStore (e.g. FilePolicyStore) to load declarative policies from JSON: rules with conditions expressed as property/operator/value (e.g. intent.confidence.level eq "High"). SafeConditionBuilder turns these into Func<Intent, bool>. Supports hot-reload from file.

What it's for: Personnel who are not developers (for example, operations or compliance teams) can edit policy rules in the JSON file; no code change or deploy is required for rule changes.

How to use: Add Intentum.Runtime.PolicyStore, register with AddFilePolicyStore(path). Load policy with await policyStore.LoadAsync(). See repository for JSON schema (PolicyDocument, PolicyRuleDocument, PolicyConditionDocument).


Behavior Pattern Detector

What it is: Intentum.Analytics provides IBehaviorPatternDetector: analyze intent history for behavior patterns, pattern anomalies, and template matching (GetBehaviorPatternsAsync, GetPatternAnomaliesAsync, MatchTemplates).

What it's for: Discover recurring intent clusters and flag anomalies (e.g. sudden spike in Block rate or unusual confidence distribution).

How to use: Register with AddBehaviorPatternDetector(). Inject IBehaviorPatternDetector and call the methods with a time window and optional template intents.
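A minimal sketch (the time-window parameter shape and the MatchTemplates signature are assumptions based on the names above):

```csharp
// Look for recurring patterns and anomalies over the last 7 days.
var detector = serviceProvider.GetRequiredService<IBehaviorPatternDetector>();
var from = DateTimeOffset.UtcNow.AddDays(-7);
var to = DateTimeOffset.UtcNow;

var patterns = await detector.GetBehaviorPatternsAsync(from, to);
var anomalies = await detector.GetPatternAnomaliesAsync(from, to);
var matches = detector.MatchTemplates(patterns, templateIntents);  // optional template intents
```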


Multi-Stage Intent Model

What it is: MultiStageIntentModel (in Intentum.Core) chains multiple IIntentModel instances with confidence thresholds. The first model that returns confidence above its threshold wins; otherwise the last model's result is returned.

What it's for: E.g. rule-based → lightweight LLM → heavy LLM, so you only pay for expensive inference when confidence is low.

How to use: Create stages (each pairing a model with a confidence threshold), construct new MultiStageIntentModel(stages), and call Infer(space) as usual.
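A sketch of the escalation chain described above (the stage type name IntentModelStage is hypothetical — check the package for the actual stage shape):

```csharp
// Cheapest model first; each stage's threshold is the minimum confidence
// at which its result is accepted. The last stage always answers.
var stages = new[]
{
    new IntentModelStage(ruleBasedModel, threshold: 0.9),
    new IntentModelStage(lightLlmModel, threshold: 0.7),
    new IntentModelStage(heavyLlmModel, threshold: 0.0),
};

var model = new MultiStageIntentModel(stages);
var intent = model.Infer(space);  // falls through stages until a threshold is met
```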


Scenario Runner

What it is: Intentum.Simulation provides IScenarioRunner (IntentScenarioRunner): run a list of BehaviorScenario (fixed sequences or random-generation params) through an intent model and policy. Returns ScenarioRunResult per scenario.

What it's for: Reproducible scenario testing, demos, or regression suites (e.g. "run 10 scenarios, assert no unexpected Blocks").

How to use: Add Intentum.Simulation, register with AddIntentScenarioRunner(). Build scenarios (e.g. from sequences or random config) and call runner.RunAsync(scenarios, model, policy).
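A regression-style sketch of the "assert no unexpected Blocks" idea (ScenarioRunResult's property names and PolicyDecision.Block are assumptions):

```csharp
// Run scenarios and fail fast on any unexpected Block.
var runner = serviceProvider.GetRequiredService<IScenarioRunner>();
var results = await runner.RunAsync(scenarios, model, policy);

foreach (var result in results)
    if (result.Decision == PolicyDecision.Block)   // hypothetical property names
        throw new InvalidOperationException($"Unexpected Block in scenario {result.ScenarioName}");
```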


Real-time Intent Stream Processing

What it is: Intentum.Core.Streaming defines IBehaviorStreamConsumer with ReadAllAsync() returning IAsyncEnumerable<BehaviorEventBatch>. MemoryBehaviorStreamConsumer is an in-memory implementation using System.Threading.Channels for tests or single-process pipelines.

What it's for: Process behavior events as a stream (e.g. from a message queue or event hub) and infer intent per batch without loading all events into memory.

How to use: Implement or use MemoryBehaviorStreamConsumer. In a worker or Azure Function, await foreach (var batch in consumer.ReadAllAsync(cancellationToken)) and run your model/policy on each batch.
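The worker loop can be sketched as follows (ReadAllAsync and MemoryBehaviorStreamConsumer are as described above; BehaviorEventBatch's Events member and the space.Observe call are assumptions):

```csharp
// Consume behavior events batch-by-batch and infer intent per batch,
// without loading the whole stream into memory.
var consumer = new MemoryBehaviorStreamConsumer();

await foreach (var batch in consumer.ReadAllAsync(cancellationToken))
{
    var space = new BehaviorSpace();
    foreach (var evt in batch.Events)   // hypothetical member name
        space.Observe(evt);             // hypothetical member name

    var intent = model.Infer(space);
    var decision = intent.Decide(policy);
}
```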


See also

Next step: When you're done with this page → What these features do or API Reference.