Silky Microservice Framework
Home
Docs
Config
Source
github
gitee
  • 简体中文
  • English
  • Startup

    • Silky Framework Source Code Analysis
    • Host Construction
    • Service Engine
    • Module System
    • Service & Service Entry Resolution
    • Service Registration
    • Dependency Injection Conventions
    • RPC Service Proxy
  • Runtime

    • Endpoints & Routing
    • Executor Dispatch System
    • Local Executor & Server-Side Filters
    • Remote Executor & RPC Call Chain
    • RPC Server Message Handling
    • Service Governance
    • Cache Interceptor
    • Distributed Transactions (TCC)
    • HTTP Gateway Pipeline
    • Filter Pipeline
    • Polly Resilience Pipeline
    • Endpoint Health Monitor

Overview

This chapter analyzes the internal workings of the Silky framework from a source code perspective, helping developers with advanced requirements understand how the framework is built and how it runs. Understanding the core design enables better customization, extension development, and troubleshooting.


Startup Phase

The startup phase is the core process of framework initialization, covering:

Host Construction

Completed via HostBuilderExtensions.RegisterSilkyServices<T>():

  • Creation of the service engine (IEngine)
  • Loading of config files (appsettings.yml, appsettings.{env}.yml, ratelimit.json, etc.)
  • Module dependency graph resolution and topological sorting
  • Autofac IoC container configuration and service registration
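A typical host bootstrap, as a rough sketch of the steps above: `RegisterSilkyServices<T>()` is the entry point named in this chapter, while the namespace layout and the startup module name (`AppHostModule`) are assumed placeholders.

```csharp
// Hypothetical Program.cs sketch: wiring Silky into a generic .NET host.
// AppHostModule is an invented startup module name for illustration.
using Microsoft.Extensions.Hosting;

var host = Host.CreateDefaultBuilder(args)
    // Creates the IEngine, loads appsettings.yml / appsettings.{env}.yml /
    // ratelimit.json, resolves and sorts the module graph, and configures
    // the Autofac container -- the four steps listed above.
    .RegisterSilkyServices<AppHostModule>()
    .Build();

await host.RunAsync();
```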

Service Engine

IEngine is the core dispatcher of the Silky framework, responsible for:

  • Managing module lifecycles (initialization, service configuration, shutdown callbacks)
  • Registering modules and framework services in the Autofac container
  • Executing cleanup logic on application shutdown

Module Resolution & Execution

Silky organizes features into modules. At startup:

  1. Starting from the specified startup module, recursively resolves all direct/indirect module dependencies
  2. Performs topological sort on modules by dependency order
  3. Calls each module's ConfigureServices to register services in order
  4. HTTP-type modules additionally call Configure to set up the middleware pipeline
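Steps 1 and 2 amount to a depth-first topological sort over the module dependency graph. A minimal, framework-independent sketch (module names are invented for illustration):

```csharp
using System;
using System.Collections.Generic;

// Minimal sketch of steps 1-2: resolve all direct/indirect module
// dependencies, then order them so dependencies come first.
static List<string> SortModules(string start, Dictionary<string, string[]> deps)
{
    var sorted = new List<string>();
    var visited = new HashSet<string>();

    void Visit(string module)
    {
        if (!visited.Add(module)) return;        // already resolved
        foreach (var dep in deps.GetValueOrDefault(module, Array.Empty<string>()))
            Visit(dep);                          // dependencies first
        sorted.Add(module);                      // then the module itself
    }

    Visit(start);
    return sorted;                               // ConfigureServices runs in this order
}

// Invented graph: HostModule -> RpcModule -> CoreModule.
var order = SortModules("HostModule", new()
{
    ["HostModule"] = new[] { "RpcModule" },
    ["RpcModule"]  = new[] { "CoreModule" },
});
// order: CoreModule, RpcModule, HostModule
```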

Service & Service Entry Resolution

Application interface scanning and service entry generation:

  • Scans assemblies for interfaces annotated with [ServiceRoute] and builds service metadata
  • Analyzes each interface method, combining route templates and HTTP verb attributes to generate service entry (ServiceEntry) information
  • Registers service entries in the in-memory IServiceEntryManager
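An application interface as scanned in this step might look like the sketch below. `[ServiceRoute]` is named in this chapter; the HTTP verb attribute and route template syntax are assumptions for illustration.

```csharp
// Hypothetical application interface: one method + verb attribute + route
// template becomes one ServiceEntry in IServiceEntryManager at startup.
[ServiceRoute]
public interface IAccountAppService
{
    // Assumed attribute/template syntax, shown only to illustrate how
    // a method maps to a service entry.
    [HttpGet("{id:long}")]
    Task<AccountDto> GetById(long id);
}
```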

Service Registration

After startup, the service publishes its routing information to the registry:

  • Assembles local IP + RPC port + all service entry metadata into a service route record
  • Uses a distributed lock to prevent concurrent multi-instance writes causing data races
  • Writes the service route info to the registry (Zookeeper / Nacos / Consul)

Dependency Injection Conventions

Silky's convention-based dependency injection mechanism:

  • Three marker interfaces: ITransientDependency / IScopedDependency / ISingletonDependency
  • DefaultDependencyRegistrar scans assemblies and auto-registers — no explicit AddXxx() needed
  • [InjectNamed] attribute supports named injection for multiple implementations of the same interface
  • PropertiesAutowired property injection; EngineContext.Current for manual resolution
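Under these conventions, implementing a marker interface is enough to get a class registered. A sketch (the service shape is invented; the marker interface name comes from this chapter):

```csharp
// Convention-based registration sketch: because OrderManager implements
// ITransientDependency, DefaultDependencyRegistrar registers it against
// IOrderManager during assembly scanning -- no explicit AddXxx() call.
public interface IOrderManager
{
    Task PlaceAsync(long productId);
}

public class OrderManager : IOrderManager, ITransientDependency
{
    public Task PlaceAsync(long productId) => Task.CompletedTask;
}
```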

RPC Service Proxy

Dynamic proxies are automatically generated at startup for remote service interfaces:

  • ServiceHelper.FindServiceProxyTypes() identifies remote service interfaces without local implementations
  • Castle.DynamicProxy.ProxyGenerator.CreateInterfaceProxyWithoutTarget() creates runtime proxy objects
  • RpcClientProxyInterceptor intercepts method calls and forwards them to IExecutor
  • Supports ServiceKey multi-implementation route pass-through
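The proxy mechanics can be illustrated with Castle.DynamicProxy directly. This is a simplified stand-in, not Silky's actual interceptor: the real `RpcClientProxyInterceptor` forwards to `IExecutor`, while the body below only records the call.

```csharp
using Castle.DynamicProxy; // requires the Castle.Core package

// Simplified stand-in for RpcClientProxyInterceptor: every call on the
// proxied interface lands in Intercept.
public class ForwardingInterceptor : IInterceptor
{
    public void Intercept(IInvocation invocation)
    {
        // Silky would dispatch here via IExecutor; we just log the call.
        Console.WriteLine($"RPC call intercepted: {invocation.Method.Name}");
        invocation.ReturnValue = Task.CompletedTask;
    }
}

// A remote interface with no local implementation (invented for illustration).
public interface IRemoteGreeter { Task SayHello(); }

var proxy = new ProxyGenerator()
    .CreateInterfaceProxyWithoutTarget<IRemoteGreeter>(new ForwardingInterceptor());
await proxy.SayHello(); // handled entirely by the interceptor
```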

Runtime Phase

The runtime phase processes actual call requests:

Endpoints & Routing

The runtime routing system maps requests to service entries and dispatches execution:

  • Route table generation: Built statically at startup from service entry information into ASP.NET Core endpoint mappings
  • Route matching: Locates the target service entry from HTTP method + path at runtime
  • Parameter extraction: Parses method arguments from path parameters, query strings, and request body
  • Local/remote dispatch: Selects local executor or RPC remote call based on whether the target service is on the current instance

Executor Dispatch System

IExecutor / DefaultExecutor is the unified entry point for service invocations, deciding whether to route locally or remotely:

  • Local/remote decision: Uses ServiceEntry.Executor delegate to determine if the target is on this instance
  • Local execution path: LocalInvoker → server-side filter pipeline → business method
  • Remote execution path: RemoteInvoker → Polly policies → DotNetty RPC → target instance

Local Executor & Server-Side Filters

DefaultLocalExecutor executes business methods on the local instance:

  • Server-side filter pipeline: IServerFilter / IAsyncServerFilter (Auth / Action / Exception / Result)
  • Filter state machine: ActionBegin → AuthorizationBegin → ActionInside → ResultBegin → InvokeEnd
  • Cache interception: GetCachingIntercept / UpdateCachingIntercept / RemoveCachingIntercept
  • ObjectMethodExecutor: Uniformly handles sync/async/Task/ValueTask return types

Remote Executor & RPC Call Chain

DefaultRemoteExecutor sends call requests to remote instances over DotNetty TCP:

  • Polly policy composition: Timeout → Retry → CircuitBreaker → Fallback (four nested layers)
  • Load balancing: Polling / Random / HashAlgorithm / Appoint strategies
  • Client-side filter pipeline: IClientFilter / IAsyncClientFilter
  • TransportClient: UUID-correlated request/response, TaskCompletionSource async wait pattern
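The TaskCompletionSource wait pattern mentioned above can be sketched framework-independently: each outgoing message gets a unique id, the caller awaits a `TaskCompletionSource`, and the channel handler completes it when the response with the matching id arrives.

```csharp
using System.Collections.Concurrent;

// Simulated round trip: register a pending request, then deliver the
// response as the DotNetty handler would.
var pending = new PendingRequests();
var id = Guid.NewGuid().ToString();       // the "UUID" correlation key
var reply = pending.Register(id);
pending.OnResponse(id, "pong");           // simulated response from the channel handler
Console.WriteLine(await reply);           // prints "pong"

// Generic sketch of the correlation table (not Silky's actual TransportClient).
public class PendingRequests
{
    private readonly ConcurrentDictionary<string, TaskCompletionSource<string>> _pending = new();

    public Task<string> Register(string messageId)
    {
        var tcs = new TaskCompletionSource<string>(TaskCreationOptions.RunContinuationsAsynchronously);
        _pending[messageId] = tcs;
        return tcs.Task;                  // caller awaits here
    }

    public void OnResponse(string messageId, string payload)
    {
        if (_pending.TryRemove(messageId, out var tcs))
            tcs.TrySetResult(payload);    // wakes the awaiting caller
    }
}
```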

RPC Server Message Handling

DotNetty TCP server listens on the RPC port, receiving and processing incoming call requests from other microservices:

  • Channel Pipeline: TLS → IdleState → LengthFieldFrame → Decoder/Encoder → ServerHandler
  • Message codec: TransportMessage (Id + ContentType + JSON Content)
  • DefaultServerMessageReceivedHandler: Find service entry → parse parameters → execute → build response
  • RpcContext attachment propagation: User identity, tenant ID, TraceId propagated implicitly through the call chain
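The wire message shape described above can be sketched as a simple record. The three field names follow this chapter; the exact types are assumptions, and on the wire the message is length-field framed by the pipeline.

```csharp
// Sketch of the TransportMessage shape (field names from this chapter;
// types are assumptions for illustration).
public record TransportMessage(
    string Id,          // correlation id (UUID) shared by request and response
    string ContentType, // distinguishes invoke messages from result messages
    string Content);    // JSON-serialized payload
```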

Service Governance

Silky implements comprehensive reliability via Polly and GovernanceOptions on both client and server sides:

  • GovernanceOptions: Unified management of timeout, retry, circuit breaker, and load balancing parameters
  • Three-tier priority: Method-level attribute > Interface-level attribute > Global config file
  • Circuit breaker: Closed → Open → Half-Open state machine, prevents cascading failures
  • Fallback: [Fallback] attribute configures a fallback method that returns default values on circuit open
  • Concurrency guard: MaxConcurrentHandlingCount limits max concurrent requests per instance
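The Closed → Open → Half-Open behavior can be sketched as a generic state machine. This is illustrative only: the thresholds are invented, and in Silky the real values come from GovernanceOptions and are enforced by Polly.

```csharp
// Simulated failure sequence driving the state machine below.
var cb = new CircuitBreaker(threshold: 2);
cb.RecordFailure();
cb.RecordFailure();            // State == "Open": [Fallback] would now take over
cb.OnBreakDurationElapsed();   // State == "HalfOpen": one trial call allowed
cb.RecordSuccess();            // State == "Closed": traffic resumes

// Generic circuit-breaker sketch (not Silky's implementation).
public class CircuitBreaker
{
    public string State { get; private set; } = "Closed";
    private int _failures;
    private readonly int _threshold;

    public CircuitBreaker(int threshold) => _threshold = threshold;

    public void RecordFailure()
    {
        if (++_failures >= _threshold)
            State = "Open";                // stop calling the failing instance
    }

    public void OnBreakDurationElapsed() => State = "HalfOpen";

    public void RecordSuccess()
    {
        _failures = 0;
        State = "Closed";                  // trial call succeeded
    }
}
```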

Cache Interceptor

AOP-based transparent caching — annotate attributes and caching is handled automatically:

  • [GetCachingIntercept]: Reads from cache first; executes method only on cache miss
  • [UpdateCachingIntercept]: Updates cache after method execution
  • [RemoveCachingIntercept] / [RemoveMatchKeyCachingIntercept]: Removes/batch-removes cache after method execution
  • KeyTemplate supports parameter name placeholders and [HashKey] property placeholders
  • Supports multi-tenant isolation, user-level isolation, and cache skipping inside distributed transactions
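A usage sketch of these attributes: the attribute names and the idea of parameter-name placeholders in KeyTemplate come from this chapter, while the service shape and the concrete template strings are invented and the exact template syntax is an assumption.

```csharp
// Hypothetical cached service interface illustrating the three attributes.
[ServiceRoute]
public interface IProductAppService
{
    [GetCachingIntercept("product:id:{id}")]          // read-through: cache hit skips the method
    Task<ProductDto> Get(long id);

    [UpdateCachingIntercept("product:id:{input.Id}")] // refresh the cache after the update runs
    Task<ProductDto> Update(ProductDto input);

    [RemoveCachingIntercept("product:id:{id}")]       // evict the entry after deletion
    Task Delete(long id);
}
```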

Distributed Transactions (TCC)

TCC-pattern distributed transaction support, non-invasive to underlying resources:

  • [TccTransaction(ConfirmMethod, CancelMethod)] marks the Try method
  • StarterTccTransactionHandler (initiator): PreTry → global Confirm/Cancel
  • ParticipantTccTransactionHandler (participant): Responds to Trying / Confirming / Canceling phases
  • Transaction context auto-propagated across microservices via RpcContext.Attachments
  • Persisted to Redis; TccTransactionRecoveryService periodically compensates incomplete transactions
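A Try/Confirm/Cancel trio might look like the sketch below. The attribute and its ConfirmMethod/CancelMethod arguments are named in this chapter; the exact attribute syntax, method names, and debit logic are assumptions for illustration.

```csharp
// Hypothetical TCC participant: the Try method reserves the resource,
// Confirm commits it, Cancel releases it.
public class BalanceAppService : IBalanceAppService
{
    [TccTransaction(ConfirmMethod = "DebitConfirm", CancelMethod = "DebitCancel")]
    public Task DebitTry(long accountId, decimal amount)
        => /* freeze the amount */ Task.CompletedTask;

    public Task DebitConfirm(long accountId, decimal amount)
        => /* deduct the frozen amount for real */ Task.CompletedTask;

    public Task DebitCancel(long accountId, decimal amount)
        => /* release the frozen amount */ Task.CompletedTask;
}
```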

HTTP Gateway Pipeline

How the gateway receives HTTP requests and routes them to internal RPC calls.

Filter Pipeline

The middleware and filter pipeline architecture for request processing.

Polly Resilience Pipeline

The layered resilience pipeline built with Polly policies.

Endpoint Health Monitor

How service endpoint health is monitored and unhealthy endpoints are managed.


Reading Guide

  • What does the framework do at startup? → Host Construction
  • How does the module system work? → Module Resolution & Execution
  • How are service entries generated? → Service & Service Entry Resolution
  • How does service registration to the registry work? → Service Registration
  • How does convention-based DI work? → Dependency Injection Conventions
  • How do remote service interfaces become injectable proxies? → RPC Service Proxy
  • How does an HTTP request route to a service entry? → Endpoints & Routing
  • How does a request decide between local and remote execution? → Executor Dispatch System
  • How do local execution and server-side filters work? → Local Executor & Server-Side Filters
  • What is the complete RPC call chain? → Remote Executor & RPC Call Chain
  • How does the server receive and handle RPC messages? → RPC Server Message Handling
  • How do I configure timeout, retry, circuit breaking, and fallback? → Service Governance
  • How do I add transparent caching to service methods? → Cache Interceptor
  • How do I implement cross-service distributed transactions? → Distributed Transactions (TCC)