Caching is a technique used to store frequently accessed data in a temporary storage layer to improve system performance and reduce latency. There are various caching strategies, each with different use cases, benefits, and trade-offs.

Different caching strategies dictate how data is read from and written to the cache and the underlying data store (e.g., a database). Below are the key caching strategies: Read-Through, Write-Through, Write-Back, Write-Around, and Cache-Aside.


1. Read-Through Cache

In a read-through cache, the application requests data from the cache. If the data isn’t present (cache miss), the cache itself fetches it from the underlying data store, stores it, and returns it to the application. The application doesn’t directly interact with the data store for reads.

  • How It Works:
    1. Application requests data from the cache.
    2. Cache checks for the data:
      • Hit: Returns data immediately.
      • Miss: Cache queries the data store, updates itself, then returns the data.
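The flow above can be sketched in Go. This is a minimal illustration, not a production cache: a plain map stands in for the cache, a caller-supplied loader function stands in for the data store, and the names (`ReadThroughCache`, `Get`) are invented for the sketch.

```go
package main

import (
	"fmt"
	"sync"
)

// ReadThroughCache wraps a loader: on a miss the cache itself
// fetches from the backing store, so the application only ever
// talks to the cache.
type ReadThroughCache struct {
	mu     sync.Mutex
	data   map[string]string
	loader func(key string) (string, error) // fetches from the data store
}

func NewReadThroughCache(loader func(string) (string, error)) *ReadThroughCache {
	return &ReadThroughCache{data: make(map[string]string), loader: loader}
}

// Get returns the cached value on a hit; on a miss it loads from
// the data store, populates the cache, and returns the value.
func (c *ReadThroughCache) Get(key string) (string, error) {
	c.mu.Lock()
	defer c.mu.Unlock()
	if v, ok := c.data[key]; ok {
		return v, nil // hit: returned immediately
	}
	v, err := c.loader(key) // miss: the cache queries the store itself
	if err != nil {
		return "", err
	}
	c.data[key] = v // populate for future reads
	return v, nil
}

func main() {
	store := map[string]string{"user:1": "Alice"}
	cache := NewReadThroughCache(func(key string) (string, error) {
		return store[key], nil
	})
	v, _ := cache.Get("user:1") // miss: loaded from the store
	fmt.Println(v)
	v, _ = cache.Get("user:1") // hit: served from the cache
	fmt.Println(v)
}
```

Note that the loader lives inside the cache object: that is the defining trait of read-through, as opposed to cache-aside where the application does the loading itself.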

Pros

✔ Automatic Cache Population – Ensures that frequently accessed data is available in the cache.

✔ Consistent Read Patterns – Applications always query the cache first, reducing database load.

✔ Improved Performance – Faster data access due to lower latency.

Cons

Higher Latency on Misses – Initial misses cause database queries, slowing performance.

Stale Data Risk – If the database is updated, the cache may serve outdated data until refreshed.

  • Example use cases:
    • User Profiles: Frequently accessed user data in applications like Facebook or Twitter.
    • Product Catalogs: E-commerce applications where product details are cached for quick retrieval.
    • CDNs: Edge nodes in content delivery networks like Cloudflare fetch from origin servers on misses.

2. Write-Through Cache

In a write-through cache, every write operation from the application goes through the cache, which immediately updates both itself and the underlying data store synchronously.

  • How It Works:
    1. Application writes data to the cache.
    2. Cache updates its own storage and synchronously writes to the data store.
    3. Write is acknowledged only after both are updated.
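The three steps above can be sketched in Go. As before, this is an illustrative sketch: a plain map stands in for the database, and the type and method names are assumptions made for the example.

```go
package main

import (
	"fmt"
	"sync"
)

// WriteThroughCache updates the cache and the backing store
// synchronously on every write.
type WriteThroughCache struct {
	mu    sync.Mutex
	cache map[string]string
	store map[string]string // stand-in for the database
}

func NewWriteThroughCache() *WriteThroughCache {
	return &WriteThroughCache{cache: make(map[string]string), store: make(map[string]string)}
}

// Put writes to the cache and then synchronously to the store;
// only after both succeed does the call return (the "acknowledgment").
func (c *WriteThroughCache) Put(key, value string) {
	c.mu.Lock()
	defer c.mu.Unlock()
	c.cache[key] = value // step 1: update the cache
	c.store[key] = value // step 2: synchronous write to the data store
}

// Get serves reads directly from the cache.
func (c *WriteThroughCache) Get(key string) (string, bool) {
	c.mu.Lock()
	defer c.mu.Unlock()
	v, ok := c.cache[key]
	return v, ok
}

func main() {
	c := NewWriteThroughCache()
	c.Put("balance:42", "100.00")
	v, _ := c.Get("balance:42")
	fmt.Println(v, c.store["balance:42"]) // cache and store always agree
}
```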

Pros

✔ Strong Consistency – Ensures that data in the cache is always up to date with the database.

✔ No Cache Miss Delays – Since data is written to the cache immediately, subsequent reads will be fast.

✔ Reduced Database Load – Reads are served directly from the cache.

Cons

Slower Writes – Since every write updates both the cache and the database, latency increases.

Unnecessary Caching of Less-Used Data – Even rarely accessed data is cached, increasing memory usage.

  • Example use cases:
    • Financial Systems: Banking applications where data consistency is critical.
    • Session Management: Keeping user session data synchronized across distributed systems. 

3. Write-Back Cache (Write-Behind)

In a write-back cache, writes are made to the cache first, and updates to the underlying data store are deferred (asynchronously). This reduces write latency since the database is updated only after a delay.

  • How It Works:
    1. Application writes to the cache.
    2. Cache acknowledges the write immediately.
    3. Cache later syncs the data to the data store (e.g., in batches or at intervals).
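The deferred sync can be sketched in Go. In this simplified version, flushing happens only when `Flush` is called; a real write-back cache would flush on a timer or size threshold and would need crash-recovery machinery, which is omitted here. The map standing in for the database and all names are assumptions for the sketch.

```go
package main

import (
	"fmt"
	"sync"
)

// WriteBackCache acknowledges writes immediately and persists
// dirty entries to the store later, in a batch.
type WriteBackCache struct {
	mu    sync.Mutex
	cache map[string]string
	dirty map[string]bool   // keys written but not yet persisted
	store map[string]string // stand-in for the database
}

func NewWriteBackCache() *WriteBackCache {
	return &WriteBackCache{
		cache: make(map[string]string),
		dirty: make(map[string]bool),
		store: make(map[string]string),
	}
}

// Put writes to memory only and returns immediately (step 2 above).
func (c *WriteBackCache) Put(key, value string) {
	c.mu.Lock()
	defer c.mu.Unlock()
	c.cache[key] = value
	c.dirty[key] = true // persisted later by Flush
}

// Flush batches all dirty entries into the store (step 3 above)
// and returns how many entries it persisted.
func (c *WriteBackCache) Flush() int {
	c.mu.Lock()
	defer c.mu.Unlock()
	n := 0
	for key := range c.dirty {
		c.store[key] = c.cache[key]
		n++
	}
	c.dirty = make(map[string]bool)
	return n
}

func main() {
	c := NewWriteBackCache()
	c.Put("log:1", "started")
	c.Put("log:2", "done")
	fmt.Println(len(c.store)) // 0: nothing persisted yet
	fmt.Println(c.Flush())    // 2: both entries written in one batch
}
```

The gap between `Put` returning and `Flush` running is exactly the data-loss window described in the cons below: if the process dies in between, those writes are gone.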

Pros

✔ Faster Writes – Writes are performed in memory first, reducing response time.

✔ Batch Writes – Multiple updates can be grouped into a single batch to optimize database writes.

✔ Improves Database Performance – Reduces the number of direct database writes.

Cons

Risk of Data Loss – If the cache fails before persisting data to the database, recent writes may be lost.

Complex Implementation – Requires mechanisms for handling failures and ensuring durability.

  • Example use cases:
    • Logging Systems: High-throughput logging where logs are buffered before being written to storage.
    • Analytics: Clickstream data processing before persisting in a database.

4. Write-Around Cache

In a write-around cache, writes bypass the cache entirely and go directly to the underlying data store. The cache only stores frequently accessed data, avoiding unnecessary cache pollution.

  • How It Works:
    1. Application writes directly to the data store.
    2. Cache isn’t updated during the write.
    3. Future reads may trigger cache population (e.g., via read-through).
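A minimal Go sketch of this flow, with plain maps standing in for the cache and the data store (both names and the function signatures are assumptions for the example):

```go
package main

import "fmt"

// writeAround sends writes directly to the store; the cache is
// deliberately not touched.
func writeAround(store, cache map[string]string, key, value string) {
	store[key] = value
}

// readWithPopulate reads through the cache: a hit is served from
// the cache, a miss falls back to the store and populates the
// cache for future reads. The second return value reports a hit.
func readWithPopulate(store, cache map[string]string, key string) (string, bool) {
	if v, ok := cache[key]; ok {
		return v, true // cache hit
	}
	v, ok := store[key]
	if ok {
		cache[key] = v // populate on first read
	}
	return v, false // miss
}

func main() {
	store := map[string]string{}
	cache := map[string]string{}

	writeAround(store, cache, "video:99", "metadata")
	_, hit := readWithPopulate(store, cache, "video:99")
	fmt.Println(hit) // false: a read right after a write misses
	_, hit = readWithPopulate(store, cache, "video:99")
	fmt.Println(hit) // true: second read is served from the cache
}
```

The guaranteed miss on the first read after a write is the trade-off this strategy makes: write-once data never pollutes the cache, at the cost of slower read-after-write.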

Pros

✔ Prevents Caching of Cold Data – Data that is written once and never read does not consume cache memory.

✔ Efficient Memory Usage – Only popular items remain in the cache.

Cons

Cache Misses on Recent Writes – Since new writes are not cached, reading immediately after writing will result in a cache miss.

Higher Read Latency – Applications relying heavily on cache hits may experience increased database queries. Not ideal for frequently updated, frequently read data.

  • Example use cases:
    • Content Delivery Networks (CDNs): Storing frequently accessed web assets while new content is retrieved from the origin server.
    • Streaming Services: Caching popular video metadata but not every user-uploaded video.

5. Cache-Aside (Lazy Loading)

In a cache-aside strategy, the application is responsible for managing the cache. It explicitly checks the cache, fetches from the data store on a miss, and updates the cache manually.

  • How It Works:
    1. Application checks cache for data:
      • Hit: Returns data.
      • Miss: Queries data store, updates cache, then returns data.
    2. Writes go to the data store, and the application decides whether to update or invalidate the cache.
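Since the application owns the orchestration here, a sketch of cache-aside is just application code. This example uses plain maps as stand-ins for a cache and a database, invalidates on write (one of the two options mentioned above), and all names are illustrative:

```go
package main

import "fmt"

// getUser implements the read path of cache-aside: the application
// checks the cache, falls back to the database on a miss, and
// populates the cache manually.
func getUser(cache, db map[string]string, id string) string {
	if v, ok := cache[id]; ok {
		return v // hit
	}
	v := db[id]   // miss: the application queries the database itself
	cache[id] = v // and populates the cache manually
	return v
}

// updateUser implements the write path: write to the database and
// invalidate the cache entry so the next read refetches fresh data.
func updateUser(cache, db map[string]string, id, name string) {
	db[id] = name
	delete(cache, id)
}

func main() {
	db := map[string]string{"u1": "Alice"}
	cache := map[string]string{}

	fmt.Println(getUser(cache, db, "u1")) // Alice (loaded from db, cached)
	updateUser(cache, db, "u1", "Alicia")
	fmt.Println(getUser(cache, db, "u1")) // Alicia (cache was invalidated)
}
```

Invalidate-on-write is generally safer than update-on-write here, because writing the cache and the database as two separate steps can interleave badly under concurrency and leave the cache stale.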

Pros

✔ Efficient Memory Usage – Only frequently accessed data is stored in the cache.

✔ Reduces Stale Data – Data is fetched from the database only when required.

✔ Simple Implementation – Works well with existing applications.

Cons

Cache Miss Penalty – Every cache miss results in a direct database query, increasing latency.

No Automatic Cache Population – Data must be manually loaded into the cache.

  • Example use cases:
    • Web Applications: Caching rendered HTML pages.
    • API Rate Limiting: Storing API responses to reduce backend load.

Comparison of Caching Strategies

Type          | Read Latency | Write Latency | Consistency | Data Loss Risk | Complexity
------------- | ------------ | ------------- | ----------- | -------------- | ----------------
Read-Through  | High on miss | N/A           | Strong      | Low            | Low (cache-side)
Write-Through | Low          | High          | Strong      | Low            | Moderate
Write-Back    | Low          | Low           | Eventual    | High           | High
Write-Around  | High on miss | Low           | Weak        | Low            | Low
Cache-Aside   | High on miss | Variable      | Variable    | Low            | High (app-side)

Conclusion:

Choosing the right caching strategy depends on your use case:

  • For fast reads with automatic caching → Use Read-Through.
  • For strong consistency → Use Write-Through.
  • For performance-optimized writes → Use Write-Behind.
  • For application-controlled caching → Use Cache-Aside.
  • For avoiding unnecessary caching → Use Write-Around.

Consistency Levels in Distributed Systems

In distributed systems, consistency defines the guarantees about the visibility and ordering of updates across different nodes. The choice of consistency level impacts performance, availability, and reliability.  

Understanding consistency levels helps you design a system effectively within its constraints.


1. Strong Consistency

  • Definition: Strong consistency guarantees that every read operation returns the most recent write operation’s result, regardless of which node in a distributed system is accessed. All nodes see the same data at the same time.
  • How It Works: After a write is acknowledged, all subsequent reads (from any client or node) reflect that write. This often requires synchronization mechanisms like locks or consensus protocols.
  • Examples:
    • Relational databases with ACID transactions (e.g., PostgreSQL, MySQL with strict settings).
    • Distributed systems using two-phase commit (2PC).
    • Distributed Databases: Google Spanner, CockroachDB
  • Advantages:
    • Predictable and intuitive behavior for applications (what you write is what you read immediately).
    • Ideal for systems requiring absolute data correctness, like financial transactions.
  • Disadvantages:
    • High latency due to coordination overhead between nodes.
    • Reduced availability in the face of network partitions (per the CAP theorem, strong consistency often sacrifices availability).
  • Use Case: Banking systems where account balances must always reflect the latest transactions.

2. Weak Consistency

  • Definition: Weak consistency does not guarantee that a read operation will reflect the most recent write. Updates may propagate to nodes lazily, and clients might see stale or inconsistent data temporarily.
  • How It Works: Nodes operate independently, and synchronization happens opportunistically (e.g., via gossip protocols or background replication). There’s no strict ordering of operations.
  • Examples:
    • Early distributed systems with minimal coordination.
    • DNS (Domain Name System), where updates propagate slowly.
  • Advantages:
    • High availability and low latency since operations don’t block for synchronization.
    • Scales well in distributed environments.
  • Disadvantages:
    • Unpredictable data states; applications must handle inconsistencies.
    • Not suitable for systems requiring immediate accuracy.
  • Use Case: Social media "like" counters, where slight delays in reflecting totals are acceptable.

3. Eventual Consistency

  • Definition: A specific form of weak consistency where, given enough time and no new updates, all nodes will eventually reflect the same data. It promises convergence rather than immediate agreement.
  • How It Works: Writes propagate asynchronously across nodes. Conflicts may arise but are resolved over time (e.g., via last-write-wins, version vectors, or manual reconciliation).
  • Examples:
    • NoSQL databases like Cassandra, DynamoDB, or Riak.
    • Distributed caches with asynchronous replication (e.g., Redis replicas, which can serve slightly stale reads).
  • Advantages:
    • High availability and partition tolerance (aligned with the CAP theorem’s “AP” systems).
    • Good performance for read-heavy or geographically distributed systems.
  • Disadvantages:
    • Temporary inconsistencies can confuse users or applications.
    • Conflict resolution logic may be complex.
  • Use Case: E-commerce product catalogs, where slight delays in stock updates are tolerable.
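The last-write-wins resolution mentioned above can be shown in a few lines of Go. This sketch uses a logical timestamp per value; real systems must also deal with clock skew or use version vectors, which is out of scope here. All names are invented for the example.

```go
package main

import "fmt"

// Versioned pairs a value with a logical timestamp so that
// conflicting replica copies can be compared.
type Versioned struct {
	Value string
	TS    int64 // logical timestamp of the write
}

// merge resolves a conflict between two replicas' copies of the
// same key: the write with the larger timestamp wins.
func merge(a, b Versioned) Versioned {
	if b.TS > a.TS {
		return b
	}
	return a
}

func main() {
	replicaA := Versioned{Value: "stock=5", TS: 10}
	replicaB := Versioned{Value: "stock=3", TS: 12} // later write

	// After an anti-entropy exchange, both replicas converge to
	// the newer write, regardless of merge order.
	converged := merge(replicaA, replicaB)
	fmt.Println(converged.Value) // stock=3
}
```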

Comparison of Strong vs. Weak/Eventual Consistency


Aspect         | Strong Consistency                  | Weak Consistency                    | Eventual Consistency
-------------- | ----------------------------------- | ----------------------------------- | ------------------------------------------
Read Guarantee | Latest write always visible         | No guarantee of latest data         | Latest data eventually visible
Latency        | Higher (due to sync)                | Lower (async operations)            | Lower (async propagation)
Availability   | Lower (blocks on failure)           | Higher                              | Higher
Complexity     | Simpler for apps, harder for system | Harder for apps, simpler for system | Moderate for both
CAP Theorem    | Prioritizes C (Consistency)         | Prioritizes A (Availability)        | Prioritizes A and P (Partition Tolerance)

Other Consistency Models


To provide a broader context, here are additional consistency levels often encountered:

  • Causal Consistency: Ensures that causally related operations (e.g., a write followed by a read) are seen in the correct order, but unrelated operations may appear out of sync. Used in systems like COPS or Bayou.
  • Read-Your-Writes Consistency: Guarantees that a client sees their own previous writes in subsequent reads, even if other clients see stale data. Common in session-based systems.
  • Monotonic Reads Consistency: Ensures that if a client reads a value, it won’t see an older value in later reads (data moves forward). Useful in distributed file systems.
  • Bounded Staleness: A hybrid model where reads may lag behind writes by a defined time or version threshold (e.g., Google Spanner).
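One of these session guarantees, read-your-writes, can be sketched concisely: the client remembers the version of its last write and only accepts reads from a replica that has caught up to it. The replica layout and version numbers here are assumptions made for the illustration.

```go
package main

import "fmt"

// Replica holds a copy of the data at some replication version.
type Replica struct {
	version int
	value   string
}

// readYourWrites returns a value only from a replica whose version
// is at least the client's last-written version, guaranteeing the
// client sees its own writes. It reports false if no replica has
// caught up yet.
func readYourWrites(replicas []Replica, minVersion int) (string, bool) {
	for _, r := range replicas {
		if r.version >= minVersion {
			return r.value, true
		}
	}
	return "", false // every replica is still stale for this client
}

func main() {
	// The client's last write produced version 5; replica 0 lags.
	replicas := []Replica{
		{version: 4, value: "old"},
		{version: 5, value: "new"},
	}
	v, ok := readYourWrites(replicas, 5)
	fmt.Println(v, ok) // new true
}
```

Other clients without the session's version token may still read the stale replica, which is exactly the distinction between this per-client guarantee and global strong consistency.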

Real-World Context


  • Strong Consistency: Used in Google’s Spanner (with TrueTime for global synchronization) or traditional RDBMS for critical operations.
  • Eventual Consistency: Powers Amazon DynamoDB (tunable consistency) and Netflix’s Cassandra deployment for user data.
  • Weak Consistency: Seen in early peer-to-peer systems or applications where immediate accuracy isn’t critical.

Consistency choice depends on application needs. For example, a chat app might use eventual consistency for message delivery but strong consistency for user authentication. The CAP theorem (under a network partition, a system must choose between Consistency and Availability) often guides these decisions in distributed systems.


Comparison of Consistency Models

Consistency Level    | Guarantees                 | Performance Impact             | Use Case
-------------------- | -------------------------- | ------------------------------ | --------------------------------
Strong Consistency   | Always latest data         | High latency                   | Banking, financial transactions
Eventual Consistency | Data converges over time   | Low latency, high availability | Social media, caching
Causal Consistency   | Maintains causal order     | Medium latency                 | Chat apps, collaborative editing
Read-Your-Writes     | User sees their own writes | Medium latency                 | Cloud storage, user preferences
Monotonic Reads      | No time-travel reads       | Medium latency                 | DNS, user sessions



Golang: Http POST Request with JSON Body example

The Go standard library ships with the "net/http" package, which has excellent support for HTTP clients and servers.
To send a JSON body in a POST request, the data must first be converted to a byte slice and attached to the request.
You can marshal the data to bytes with the "encoding/json" package, wrap the result with bytes.NewBuffer, and pass the buffer to http.Post.

package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"io"
	"net/http"
)

// If the struct field names do not match the JSON attributes,
// you can specify the actual attribute name in a `json:"attname"` tag,
// as shown below.
type User struct {
	Name string `json:"name"`
	Job  string `json:"job"`
}

func main() {

	// Create the user struct we want to post.
	user := User{
		Name: "Test User",
		Job:  "Go lang Developer",
	}

	// Convert the User to a byte slice using json.Marshal.
	body, err := json.Marshal(user)
	if err != nil {
		panic(err)
	}

	// Wrap the byte slice in a buffer and post it to the URL.
	resp, err := http.Post("https://reqres.in/api/users", "application/json", bytes.NewBuffer(body))

	// An error is returned if there were too many redirects
	// or if there was an HTTP protocol error.
	if err != nil {
		panic(err)
	}
	// The response body must be closed once the response has been
	// read; deferring the Close takes care of that automatically.
	defer resp.Body.Close()

	// Check the response code; if the new user was created, read the response.
	if resp.StatusCode == http.StatusCreated {
		respBody, err := io.ReadAll(resp.Body)
		if err != nil {
			// Failed to read the response.
			panic(err)
		}

		// Convert the bytes to a string and print the result.
		fmt.Println("Response:", string(respBody))

	} else {
		// The status is not Created; print the error status.
		fmt.Println("POST failed with status:", resp.Status)
	}
}
