Silky Microservice Framework

Distributed Transactions (TCC)

Overview

Silky provides distributed transaction support based on the TCC (Try-Confirm-Cancel) pattern, an application-level distributed transaction solution that does not intrude on underlying resources (database, message queue, etc.):

  • Try: Reserve resources, perform business checks and resource locking
  • Confirm: Commit the operation using the resources reserved in the Try phase
  • Cancel: Roll back, releasing all resources reserved during Try

Compared to two-phase commit (2PC), TCC holds no database locks between phases, which makes it better suited to cross-microservice business consistency scenarios.


Usage

Annotate the Try method with [TccTransaction], specifying the Confirm and Cancel method names:

public interface IAccountAppService
{
    /// <summary>Try phase: deduct balance (reserve)</summary>
    [TccTransaction(ConfirmMethod = "DeductBalanceConfirm", CancelMethod = "DeductBalanceCancel")]
    Task<bool> DeductBalance(DeductBalanceInput input);

    /// <summary>Confirm phase: commit the deduction</summary>
    Task<bool> DeductBalanceConfirm(DeductBalanceInput input);

    /// <summary>Cancel phase: restore the balance (rollback)</summary>
    Task<bool> DeductBalanceCancel(DeductBalanceInput input);
}
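
A participant service then implements all three methods. The sketch below is illustrative only: the Try phase reserves by freezing funds rather than deducting them outright, and `IAccountRepository` with its Freeze/Commit/Unfreeze methods is a hypothetical abstraction, not part of Silky:

```csharp
public class AccountAppService : IAccountAppService
{
    private readonly IAccountRepository _accounts; // hypothetical persistence abstraction

    public AccountAppService(IAccountRepository accounts) => _accounts = accounts;

    // Try: reserve only — move the amount from Balance into a frozen bucket
    public Task<bool> DeductBalance(DeductBalanceInput input)
        => _accounts.FreezeAsync(input.AccountId, input.Amount);

    // Confirm: the reservation becomes the real deduction
    public Task<bool> DeductBalanceConfirm(DeductBalanceInput input)
        => _accounts.CommitFrozenAsync(input.AccountId, input.Amount);

    // Cancel: release the frozen amount back to Balance
    public Task<bool> DeductBalanceCancel(DeductBalanceInput input)
        => _accounts.UnfreezeAsync(input.AccountId, input.Amount);
}
```

Keeping the Try phase to a pure reservation is what makes Confirm and Cancel cheap: each is a single state transition on the frozen amount.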

Roles & Flow

Starter (Initiator)

The Starter is the TCC transaction entry point — typically the business flow orchestrator (e.g., order service). The Starter:

  1. Calls all participants' Try methods
  2. Triggers global Confirm after all Try calls succeed
  3. Triggers global Cancel if any Try call fails

Participant

A Participant is a microservice called remotely by the Starter (e.g., account service, inventory service). Each participant:

  1. Receives the Try request and reserves resources
  2. Executes Confirm or Cancel based on the Starter's final decision
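
Putting the two roles together, a Starter can be sketched as an ordinary application service that calls the participants' Try methods through their RPC proxies; enlistment and the final Confirm/Cancel decision are driven by the framework. Everything order-specific below (`IOrderAppService`, `CreateOrderInput`, `IInventoryAppService.ReduceStock`) is a hypothetical example, not Silky API:

```csharp
public class OrderAppService : IOrderAppService
{
    private readonly IAccountAppService _accountService;     // RPC proxy to AccountService
    private readonly IInventoryAppService _inventoryService; // RPC proxy to InventoryService

    public OrderAppService(IAccountAppService accountService,
                           IInventoryAppService inventoryService)
    {
        _accountService = accountService;
        _inventoryService = inventoryService;
    }

    // Starter entry point: each proxy call below hits a [TccTransaction]-annotated
    // Try method, enlisting that service as a participant. If both succeed the
    // framework drives global Confirm; if either throws, it drives global Cancel.
    public async Task CreateOrder(CreateOrderInput input)
    {
        await _accountService.DeductBalance(new DeductBalanceInput
        {
            AccountId = input.AccountId,
            Amount = input.TotalPrice
        });
        await _inventoryService.ReduceStock(new ReduceStockInput
        {
            ProductId = input.ProductId,
            Quantity = input.Quantity
        });
    }
}
```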

Full Sequence

OrderService (Starter)               AccountService (Participant)    InventoryService (Participant)
    │                                       │                               │
    │── Try: DeductBalance ─────────────────▶                               │
    │── Try: ReduceStock ───────────────────────────────────────────────────▶
    │                                       │                               │
    │    All Try succeed                    │                               │
    │                                       │                               │
    │── Confirm: DeductBalanceConfirm ──────▶                               │
    │── Confirm: ReduceStockConfirm ────────────────────────────────────────▶
    │                                       │                               │
    │    [If any Try fails]                 │                               │
    │                                       │                               │
    │── Cancel: DeductBalanceCancel ────────▶                               │
    │── Cancel: ReduceStockCancel ──────────────────────────────────────────▶

Transaction Context Propagation

The TCC transaction context is automatically propagated across microservice boundaries through RpcContext.Attachments:

  • Starter side: Before calling a participant's Try method, the transaction ID and phase are set in RpcContext.Context.SetAttachments()
  • Transport layer: Attachments are serialized into RemoteInvokeMessage.Attachments
  • Participant side: The server-side DefaultServerMessageReceivedHandler restores attachments into the server-side RpcContext

This ensures participants know they are inside a distributed transaction without explicit parameter threading.
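
Conceptually, the propagation looks like the following. Note that the `SetAttachment`/`GetAttachment` method names and the key strings are illustrative assumptions; only `RpcContext` and its attachments mechanism are confirmed by the text above:

```csharp
// Starter side (before the participant RPC call): attach transaction metadata.
RpcContext.Context.SetAttachment("tcc_transactionId", transactionId);
RpcContext.Context.SetAttachment("tcc_phase", "Try");

// Transport: attachments are serialized into RemoteInvokeMessage.Attachments.

// Participant side (after DefaultServerMessageReceivedHandler restores them):
var transactionId = RpcContext.Context.GetAttachment("tcc_transactionId");
var phase = RpcContext.Context.GetAttachment("tcc_phase");
```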


StarterTccTransactionHandler

Handles the initiator role:

  1. PreTry: Creates a global transaction record in Redis with status Trying
  2. Try execution: Calls all participant Try methods (RPC calls)
  3. On all success → Confirm: Updates transaction status to Confirming, calls all participant Confirm methods
  4. On any failure → Cancel: Updates transaction status to Canceling, calls all participant Cancel methods

ParticipantTccTransactionHandler

Handles the participant role:

  1. Receives Try / Confirm / Cancel calls
  2. Creates a participant transaction record for each phase
  3. Executes the corresponding business method (Try / Confirm / Cancel)
  4. Updates participant record status after execution

Redis Persistence

Transaction records are persisted to Redis by TccTransactionRepository:

Key format:  silky:transaction:{transactionId}
Value:       Serialized TccTransaction object
TTL:         StoreDays configuration (default 3 days)

Each participant's Try/Confirm/Cancel call also creates a TccParticipant record nested within the transaction.


TccTransactionRecoveryService — Automatic Compensation

A background service that periodically compensates incomplete transactions:

Config                   Default            Description
ScheduledInitDelay       20s                Delay before first recovery run after startup
ScheduledRecoveryDelay   30s                Interval between recovery runs
RecoverDelayTime         259200s (3 days)   Transactions older than this are eligible for recovery
RetryMax                 10                 Max recovery retry attempts per transaction
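
Expressed as an appsettings fragment, these options might look like the sketch below. The section path and key spellings are assumptions made for illustration; consult the Config reference for the exact names:

```json
{
  "DistributedTransactions": {
    "ScheduledInitDelay": 20,
    "ScheduledRecoveryDelay": 30,
    "RecoverDelayTime": 259200,
    "RetryMax": 10,
    "StoreDays": 3
  }
}
```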

Recovery logic:

  1. Query Redis for all transactions in Trying / Confirming / Canceling state
  2. For transactions exceeding RecoverDelayTime: re-invoke the appropriate Confirm or Cancel methods
  3. After RetryMax retries: mark transaction as Failed and alert (log error)

Important Implementation Notes

  • Try methods must be idempotent: RPC retries can deliver the same Try request more than once
  • Confirm and Cancel methods must be idempotent: network failures can cause repeated calls
  • Cancel must always succeed: if Cancel fails, the recovery service will retry; never throw unrecoverable exceptions from Cancel
  • Avoid long-running Try phases: resources are locked during the Try phase; long holds increase contention
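
As one concrete illustration of these rules, a Cancel method can guard against both repeated delivery and "empty rollback" (a Cancel that arrives before its Try, e.g. when the Try request timed out in transit) by tracking compensation state. The `_accounts` repository methods and the `TransactionId` field are hypothetical:

```csharp
public async Task<bool> DeductBalanceCancel(DeductBalanceInput input)
{
    // Idempotency guard: if this cancel was already applied,
    // report success instead of unfreezing twice.
    if (await _accounts.IsCancelledAsync(input.TransactionId))
    {
        return true;
    }

    // Empty-rollback guard: Cancel may arrive before Try ever executed;
    // record the cancel so a late-arriving Try can be rejected, and return.
    if (!await _accounts.HasFrozenAsync(input.TransactionId))
    {
        await _accounts.MarkCancelledAsync(input.TransactionId);
        return true;
    }

    // Normal path: release the reservation and record completion.
    await _accounts.UnfreezeAsync(input.AccountId, input.Amount);
    await _accounts.MarkCancelledAsync(input.TransactionId);
    return true;
}
```

Returning success (rather than throwing) from both guard paths keeps Cancel retry-safe, matching the rule that Cancel must always eventually succeed.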