Hibernate cache in theory

December 11, 2016

This note covers:

1. Introduction to the Hibernate cache.
2. When should we apply caching?
3. Cache-provider/concurrency-strategy compatibility.
4. In-process and out-process caching architectures.
5. Discussion.
6. References.

1. Introduction.

Hibernate has a two-level cache architecture: first-level cache and second-level cache.

Figure 1: Hibernate’s two-level cache architecture (source: Manning)

  • The first-level cache is the persistence context cache (scoped to the Session). It ensures that when the application requests the same persistent object twice in a particular session, it gets back the same (identical) Java instance, and it tracks all entity instances handled in a particular unit of work. Hibernate does not share this cache between threads; each application thread has its own copy of the cached data. As a result, the first-level cache avoids transaction-isolation and concurrency issues when the cache is accessed.
  • The second-level cache has process or cluster scope in the JVM (scoped to the SessionFactory). Multiple application threads may access the shared second-level cache concurrently. The second-level cache is very useful for reducing read/write transaction response times, especially in master-slave replication architectures.
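The first-level cache's repeatable-read guarantee can be illustrated with a plain-Java identity map keyed by entity id. This is a simplified sketch, not Hibernate's actual implementation; the `SessionSketch` class and `loadFromDatabase` stub are hypothetical stand-ins.

```java
import java.util.HashMap;
import java.util.Map;

// Simplified sketch of a persistence-context (first-level) cache:
// within one "session", the same id always yields the same instance.
class SessionSketch {
    private final Map<Long, Object> firstLevelCache = new HashMap<>();

    Object get(Long id) {
        // Return the cached instance if this session already loaded it;
        // otherwise load it once and remember it.
        return firstLevelCache.computeIfAbsent(id, this::loadFromDatabase);
    }

    private Object loadFromDatabase(Long id) {
        // Stand-in for a real database hit.
        return new Object();
    }
}

public class Demo {
    public static void main(String[] args) {
        SessionSketch session = new SessionSketch();
        Object first = session.get(42L);
        Object second = session.get(42L);
        // Same session, same id -> the identical Java instance.
        System.out.println(first == second); // prints "true"
    }
}
```

Because each thread has its own session (and therefore its own map), no cross-thread synchronization is needed for this cache.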

2. When should we apply caching?

“The cache is usually useful only for read-mostly classes” (Hibernate in Action, Christian B., Gavin K.)
Looking at the domain model diagram, good candidates for caching are entity classes that hold:

  • Data that changes rarely.
  • Non-critical data (for example, content-management data).
  • Data that’s local to the application and not modified by other applications.

Bad candidates include:

  • Data that is updated often.
  • Financial data, where decisions must be based on the latest update.
  • Data that is shared with and/or written by other applications.

Reference data that is small enough to fit in memory and is rarely or never updated, such as ZIP codes, locations, and static text messages, is an excellent candidate for shared caching.

3. Cache-provider/concurrency-strategy compatibility.

The Hibernate second-level cache is set up in two steps: first choose a concurrency strategy, then configure a cache provider. A concurrency strategy implements net.sf.hibernate.cache.CacheConcurrencyStrategy. The following table shows which concurrency strategies each cache provider supports:

Cache                                        read-only   nonstrict-read-write   read-write   transactional
Hashtable (not intended for production use)  yes         yes                    yes
EHCache                                      yes         yes                    yes
OSCache                                      yes         yes                    yes
SwarmCache                                   yes         yes
JBoss Cache 1.x                              yes                                             yes
JBoss Cache 2                                yes                                             yes

Table 1: Cache concurrency strategy support (source: jboss.org).

Hibernate Performance Tuning and Best Practices (Vlad Mihalcea) is a good worked example of caching. Hands-on Ehcache usage will be covered in a separate topic.
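In current Hibernate versions, the concurrency strategy is typically selected per entity with the `@Cache` annotation. The sketch below is a configuration fragment, not runnable on its own; the `ZipCode` entity and the `referenceData` region name are illustrative assumptions.

```java
import javax.persistence.Entity;
import javax.persistence.Id;

import org.hibernate.annotations.Cache;
import org.hibernate.annotations.CacheConcurrencyStrategy;

// Reference data that rarely changes -- a good second-level cache candidate,
// so a read-only strategy is appropriate here.
@Entity
@Cache(usage = CacheConcurrencyStrategy.READ_ONLY, region = "referenceData")
public class ZipCode {

    @Id
    private String code;

    private String city;
}
```

The `usage` value must be one the configured provider supports (see Table 1); for example, a read-write strategy would not work with the JBoss Cache providers listed above.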

4. In-process and out-process caching architectures.

Architecture is a very important factor in achieving good caching efficiency. There are two types of caching architecture: in-process caching and out-process caching.

4.1 In-process caching.

In in-process caching, the cache layer is embedded directly in the application, and developers handle all cache policies within the application. This is the most common scenario.

Figure 2: In-process caching

Advantages of in-process caching:

  • Simple implementation and maintenance.
  • Quick access to cached objects.

Disadvantages of in-process caching:

  • Higher consumption of memory and/or CPU.
  • Increased garbage-collector load.
  • Data inconsistency (multiple copies of objects stored at large volume).

4.2 Out-process caching.

Out-process caching is often referred to as distributed caching: the application talks directly to an external component (a cache server) and gets data from it when the cache server is configured as the primary resource. This caching model suits the majority of enterprise applications.

Figure 3: Out-process caching.

As the figure above shows, communication between the server cluster and the cache server happens through common means such as RMI and web services. Consequently, network issues (How high is the network latency? How fast is the bit rate?) need to be considered carefully before applying this model.

Advantages of out-process caching:

  • Decreased garbage-collector load.
  • More effective use of memory and CPU.
  • A single shared object store.

Disadvantages of out-process caching:

  • Requires network access.
  • Implementation is more difficult.
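The read-through pattern behind out-process caching can be sketched with plain Java interfaces. The `CacheServer` and `Database` types below are hypothetical stand-ins for a real remote-cache client (such as a memcached or Redis driver) and a real data source; this is an illustration of the pattern, not a production implementation.

```java
import java.util.Optional;

// Stand-in for a remote cache server; in a real system these calls
// cross the network, which is why latency matters in this model.
interface CacheServer {
    Optional<String> get(String key);
    void put(String key, String value);
}

// Stand-in for the authoritative data source.
interface Database {
    String load(String key);
}

// Read-through: consult the shared cache first, fall back to the database
// on a miss, then populate the cache so every node in the cluster benefits.
class ReadThroughRepository {
    private final CacheServer cache;
    private final Database db;

    ReadThroughRepository(CacheServer cache, Database db) {
        this.cache = cache;
        this.db = db;
    }

    String find(String key) {
        return cache.get(key).orElseGet(() -> {
            String value = db.load(key);
            cache.put(key, value);
            return value;
        });
    }
}
```

Because the object store is shared, a value loaded by one application node is immediately visible to the others, which is what removes the data-inconsistency problem of in-process caching at the cost of a network hop.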

5. Discussion.

The purpose of a cache is to store data and reuse it as fast as possible. The project architecture and its performance bottlenecks are the key factors in selecting the right caching solution. Symptoms such as high data loads, high memory/CPU consumption, proven delays while serving content, latency in your web-services layer, or a cluster that is not performing adequately need to be taken into account thoughtfully.

References:

  • Christian B. and Gavin K., “Transaction, Concurrency, and Caching”, in Hibernate in Action, Chapter 5, Manning Publications, 2005.
  • Christian B., Gavin K., and Gary G., “Caching Data”, in Java Persistence with Hibernate, Second Edition, Chapter 20, Manning Publications, 2016.
  • Daniel Wind, “Core Concepts”, In Instant Effective Caching with Ehcache, Packt Publishing, 2016.
  • Hibernate Performance Tuning and Best Practices, Vlad Mihalcea. Retrieved from http://in.relation.to/2016/09/28/performance-tuning-and-best-practices/ on Dec 11, 2016.
  • Hibernate ORM documentation (5.2), Retrieved from http://hibernate.org/orm/documentation/5.2/ on Dec 11, 2016.
  • High-Performance Hibernate (Vlad Mihalcea), Youtube. Retrieved from https://www.youtube.com/watch?v=BTdTEe9QL5k&t=1s
