Silky Microservice Framework
Polly Resilience Pipeline

Overview

Silky uses Polly to dynamically build a resilience policy chain for each service entry (ServiceEntry). Policies are cached by serviceEntryId to avoid repeated construction. The framework maintains independent policy pipelines on the client (caller) and server (provider) sides.


Client-Side Policy Pipeline

DefaultInvokePolicyBuilder

Maintains a ConcurrentDictionary<string, IAsyncPolicy<object?>> cache keyed by serviceEntryId:

public IAsyncPolicy<object?> Build(string serviceEntryId)
{
    return _policyCaches.GetOrAdd(serviceEntryId, id =>
    {
        IAsyncPolicy<object?> policy = Policy.NoOpAsync<object?>();

        // Layer 1: Result policies (e.g., overflow retry)
        foreach (var provider in _policyWithResultProviders)
            policy = policy.WrapAsync(provider.Create(id));

        // Layer 2: General policies (e.g., timeout)
        foreach (var provider in _policyProviders)
            policy = policy.WrapAsync(provider.Create(id));

        // Layer 3: Circuit breaker
        foreach (var provider in _circuitBreakerPolicyProviders)
            policy = policy.WrapAsync(provider.Create(id));

        return policy;
    });
}

// Fallback-included build (not cached — Fallback depends on parameters)
public IAsyncPolicy<object?> Build(string serviceEntryId, object[] parameters)
{
    var policy = Build(serviceEntryId); // get cached base policy
    foreach (var provider in _invokeFallbackPolicyProviders)
        policy = policy.WrapAsync(provider.Create(serviceEntryId, parameters));
    return policy;
}

Note: Fallback policy is not cached because it captures the original parameters for the fallback method invocation. It is re-wrapped around the cached base policy on every call.
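The split between the cached base chain and the per-call fallback wrapper can be modeled in a few lines. The sketch below (illustrative Python, not the Silky implementation; all names are hypothetical) shows a GetOrAdd-style cache that builds the base policy at most once per service entry id, while the fallback closure is re-created on every call because it captures that call's parameters:

```python
from threading import Lock

# Hypothetical mini-model of DefaultInvokePolicyBuilder's caching scheme.
_policy_cache = {}
_cache_lock = Lock()
build_count = {"base": 0}  # instrumentation to show the base chain is built once

def build_base_policy(service_entry_id):
    """GetOrAdd-style lookup: construct the base chain at most once per id."""
    with _cache_lock:
        if service_entry_id not in _policy_cache:
            build_count["base"] += 1
            # stand-in for the retry/timeout/circuit-breaker composition
            _policy_cache[service_entry_id] = lambda call: call()
        return _policy_cache[service_entry_id]

def build_with_fallback(service_entry_id, parameters):
    """Wrap the cached base policy with a fresh, parameter-capturing fallback."""
    base = build_base_policy(service_entry_id)
    def policy(call):
        try:
            return base(call)
        except Exception:
            # the fallback closes over this call's parameters, so it cannot be cached
            return f"fallback({parameters})"
    return policy
```

Two calls with different parameters share one cached base chain but get distinct fallback wrappers, mirroring the note above.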

Policy Layer Details

Timeout Policy (DefaultTimeoutInvokePolicyProvider)

Polly Optimistic timeout (relies on CancellationToken):

Policy.TimeoutAsync(
    TimeSpan.FromMilliseconds(governanceOptions.TimeoutMillSeconds),
    TimeoutStrategy.Optimistic);
  • Not created when TimeoutMillSeconds <= 0
  • Optimistic mode: Does not forcibly abort the Task; cancels the CancellationToken at timeout, following .NET async conventions
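The optimistic contract can be sketched outside of Polly: the timeout layer never aborts the work, it only hands the delegate a cancellation token and converts an observed cancellation into a timeout error. This is an illustrative Python model (`TimeoutRejected`, `CancellationToken`, and `optimistic_timeout` are hypothetical names standing in for Polly's `TimeoutRejectedException` and .NET's `CancellationToken`):

```python
import time

class TimeoutRejected(Exception):
    """Stand-in for Polly's TimeoutRejectedException."""

class CancellationToken:
    """Cooperative cancellation, mirroring .NET's CancellationToken."""
    def __init__(self, deadline):
        self.deadline = deadline
    @property
    def cancelled(self):
        return time.monotonic() >= self.deadline

def optimistic_timeout(timeout_ms, call):
    """Optimistic: never forcibly aborts the work; it hands the delegate a
    token and trusts it to observe cancellation and bail out."""
    token = CancellationToken(time.monotonic() + timeout_ms / 1000.0)
    try:
        return call(token)
    except InterruptedError:  # stand-in for OperationCanceledException
        raise TimeoutRejected()
```

A delegate that never checks the token simply keeps running past the deadline, which is exactly the trade-off of optimistic mode.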

Overflow Retry Policy (OverflowServerHandleFailoverPolicyProvider)

Triggers on OverflowMaxServerHandleException (server concurrent limit exceeded):

Policy<object>
    .Handle<OverflowMaxServerHandleException>()
    .WaitAndRetryAsync(
        retryCount: governanceOptions.RetryTimes,
        sleepDurationProvider: _ => TimeSpan.FromMilliseconds(governanceOptions.RetryIntervalMillSeconds),
        onRetry: (outcome, timeSpan, retryNumber, context) => { /* log */ });

Each retry selects a different endpoint (load balancer picks a new one) to avoid routing to the same overloaded instance.

Communication Failover Retry Policy (CommunicationFailoverPolicyProvider)

Triggers on transport-level exceptions (network errors, connection refused, etc.):

Policy<object>
    .Handle<CommunicationException>()
    .WaitAndRetryAsync(
        retryCount: governanceOptions.RetryTimes,
        sleepDurationProvider: _ => TimeSpan.FromMilliseconds(governanceOptions.RetryIntervalMillSeconds));

Circuit Breaker Policy (CircuitBreakerPolicyProvider)

Tracks consecutive non-business exceptions:

Policy<object>
    .Handle<Exception>(ex => !ex.IsBusinessException())
    .CircuitBreakerAsync(
        handledEventsAllowedBeforeBreaking: governanceOptions.ExceptionsAllowedBeforeBreaking,
        durationOfBreak: TimeSpan.FromSeconds(governanceOptions.BreakerSeconds),
        onBreak: (outcome, breakDelay) => { /* circuit opened */ },
        onReset: () => { /* circuit closed */ },
        onHalfOpen: () => { /* one test call allowed */ });
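The Closed → Open → Half-Open lifecycle behind these callbacks can be modeled as a small state machine. The following is an illustrative Python sketch of the general consecutive-failure circuit breaker pattern (not Silky's or Polly's implementation; all names are hypothetical):

```python
import time

class BrokenCircuitError(Exception):
    """Stand-in for Polly's BrokenCircuitException: thrown while the circuit is open."""

class CircuitBreaker:
    """Closed -> Open after N consecutive failures; Open -> Half-Open after the
    break duration; Half-Open allows one probe that closes or re-opens the circuit."""
    def __init__(self, failures_before_breaking, break_seconds):
        self.threshold = failures_before_breaking
        self.break_seconds = break_seconds
        self.failures = 0
        self.opened_at = None  # None == closed

    def execute(self, call):
        if self.opened_at is not None:
            if time.monotonic() - self.opened_at < self.break_seconds:
                raise BrokenCircuitError()  # still open: fail fast, no call made
            # break elapsed -> half-open: fall through and allow one probe call
        try:
            result = call()
        except Exception:
            self.failures += 1
            if self.failures >= self.threshold or self.opened_at is not None:
                self.opened_at = time.monotonic()  # open (or re-open after a failed probe)
            raise
        self.failures = 0
        self.opened_at = None  # success (or successful probe): reset to closed
        return result
```

The fail-fast branch is the key payoff: while open, calls are rejected without touching the (presumably unhealthy) downstream service.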

Fallback Policy (InvokeFallbackPolicyProvider)

Wraps all other policies. Invoked when the circuit is open or all retries fail:

Policy<object>
    .Handle<Exception>()
    .FallbackAsync(
        fallbackAction: async (ctx, ct) =>
        {
            // Call the [Fallback]-annotated implementation
            return await fallbackProvider.InvokeAsync(parameters);
        },
        onFallbackAsync: async (outcome, ctx) =>
        {
            // Log the original exception
        });

Composed Policy Execution Order

When all layers are active, the outer-to-inner execution order is:

Fallback (outermost)
    └── CircuitBreaker
        └── CommunicationFailover Retry
            └── OverflowMaxServer Retry
                └── Timeout (innermost)
                    └── Actual RPC call

On timeout: the timeout policy cancels its CancellationToken and surfaces Polly's TimeoutRejectedException → the retry policies (which handle only OverflowMaxServerHandleException and CommunicationException) let it propagate → CircuitBreaker counts it as a non-business failure → if the threshold is met, the circuit opens → Fallback handles the exception (and BrokenCircuitException on subsequent calls while the circuit is open)
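The nesting above can be demonstrated by wrapping the call innermost-first, so each later layer becomes the new outermost one. This illustrative Python sketch (hypothetical names) records the enter/exit order to show that the last layer wrapped is the first to run:

```python
# Each policy is modeled as a wrapper that records when it runs.
trace = []

def layer(name, inner):
    """Wrap `inner` so this layer runs around it, like PolicyWrap nesting."""
    def wrapped():
        trace.append(name + ":enter")
        result = inner()
        trace.append(name + ":exit")
        return result
    return wrapped

def rpc_call():
    trace.append("rpc")
    return "result"

# Wrap innermost-first: Timeout hugs the RPC call, Fallback ends up outermost.
pipeline = rpc_call
for name in ["Timeout", "OverflowRetry", "FailoverRetry", "CircuitBreaker", "Fallback"]:
    pipeline = layer(name, pipeline)
```

Running `pipeline()` enters Fallback first and the Timeout layer last before the actual call, matching the outer-to-inner diagram above.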


Server-Side Policy Pipeline

DefaultServerHandlePolicyBuilder

Builds the server-side policy for local execution:

// Server-side: primarily MaxConcurrentHandling guard
Policy
    .BulkheadAsync(
        maxParallelization: governanceOptions.MaxConcurrentHandlingCount,
        maxQueuingActions: 0,
        onBulkheadRejectedAsync: async (ctx) =>
        {
            throw new OverflowMaxServerHandleException(serviceEntryId);
        });
  • MaxConcurrentHandlingCount = 0: No limit (no bulkhead policy created)
  • On rejection: Throws OverflowMaxServerHandleException, which the client-side overflow retry policy catches and retries on a different endpoint
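A zero-queue bulkhead is essentially a non-blocking semaphore acquire: take a slot or reject immediately. The sketch below is an illustrative Python model of that pattern (hypothetical names; not Silky's server-side code), mirroring `maxQueuingActions: 0` above:

```python
import threading

class OverflowMaxServerHandleError(Exception):
    """Stand-in for OverflowMaxServerHandleException."""

class Bulkhead:
    """Bulkhead with zero queue: acquire a concurrency slot or reject at once."""
    def __init__(self, max_parallelization):
        self._slots = threading.Semaphore(max_parallelization)

    def execute(self, call):
        if not self._slots.acquire(blocking=False):  # no queuing: reject immediately
            raise OverflowMaxServerHandleError()
        try:
            return call()
        finally:
            self._slots.release()  # free the slot even if the call throws
```

The rejection exception then travels back over RPC to the caller, whose overflow retry policy reroutes the request to another endpoint.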

Policy Cache Lifetime

Client-side policies (excluding Fallback) are cached for the application lifetime in ConcurrentDictionary. Since GovernanceOptions are static (determined at startup from config + attributes), cached policies remain valid as long as the application runs.

If GovernanceOptions need to change at runtime (dynamic governance), call _invokePolicyBuilder.ClearCache(serviceEntryId) to force policy rebuild on next invocation.
