Redis vs Memcached – What’s the Difference? (Pros and Cons)

Redis vs Memcached – What’s the Difference? (Pros and Cons). One of developers’ main goals when building dynamic applications is to maximize speed, and caching is one of the most efficient ways to achieve it. Caching is therefore essential in application development, since it stores the results of calculations that are repetitive and time consuming to compute.

If you have an application that performs many database reads, serving some of those reads from a cache speeds up the application and eliminates the latency of repetitive database access.
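As a simple illustration, this read-through idea is often called the cache-aside pattern. The sketch below models it in Python; the dictionary stands in for a real cache like Redis or Memcached, and the function names are purely illustrative:

```python
import time

# Hypothetical in-memory cache standing in for Redis/Memcached.
cache = {}

def slow_db_query(user_id):
    """Stand-in for an expensive database read."""
    time.sleep(0.01)  # simulate I/O latency
    return {"id": user_id, "name": f"user-{user_id}"}

def get_user(user_id):
    """Cache-aside: check the cache first, fall back to the database."""
    key = f"user:{user_id}"
    if key in cache:
        return cache[key]           # cache hit: no database round trip
    value = slow_db_query(user_id)  # cache miss: read from the database
    cache[key] = value              # populate the cache for next time
    return value
```

After the first call for a given user, every subsequent call is served from memory without touching the database.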

Redis and Memcached are both in-memory data stores, and both are free and open source. However, they differ considerably in functionality and features. In this article, we’ll explain the features that distinguish Redis from Memcached, their pros and cons, and their differences.

Shall we start with Redis vs Memcached – What’s the Difference?

What is Redis?

First of all, Redis stands for Remote Dictionary Server. It is an in-memory data store used by millions of developers, mainly as a database, cache, streaming engine, and message broker. Salvatore Sanfilippo started the Redis project in 2009 while trying to improve the scalability of his start-up. The result was Redis, which now serves as a database, cache, and message broker.

Secondly, Redis was designed so that data is always read and modified in memory, while also being persisted to disk in a format unsuitable for random access. Data stored this way supports fast retrieval without help from a traditional database system. Thanks to its sub-millisecond response times, Redis can serve millions of requests per second, and it is widely used in industries like IoT, gaming, and finance.

Up next with Redis vs Memcached – What’s the Difference? it is time to learn about Redis’ features.

Features of Redis

  • Redis handles data durability in two different ways: snapshotting and journaling. Snapshotting uses the Redis Database (RDB) format to create snapshots of your data set at specific intervals, while journaling logs every write operation received by the server to an Append Only File (AOF). Redis can rewrite the AOF in the background when it grows too big. In the event of a system failure, you lose at most a few seconds of data. 
  • Redis provides users with a collection of native data types that they can use to solve a wide variety of problems. As can be seen, the core data structures include strings, lists, sets, hashes, sorted sets, bitmaps, and more. These core structures can handle caching, queueing, event processing, etc. 
  • Redis Cluster is a deployment topology that provides an efficient way to run a Redis installation, with data sharded automatically across multiple nodes. Redis Cluster supports partitioning, allowing you to split your data set across multiple nodes. When some nodes fail to communicate, the cluster allows operations to continue; however, if a large proportion of the nodes are inoperable, the cluster stops operating. 
  • Redis provides users with a programmable interface to execute custom scripts on the server. From Redis 7 onwards, you can use Redis Functions to manage and execute scripts directly on the server. Another key point is that during script execution, Redis blocks all other server activity, preventing you from running any other command. You can also program the server with Lua scripting via the EVAL command. 
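To make the journaling idea behind the AOF concrete, here is a minimal Python sketch (not Redis code; the class and file format are invented for illustration) that logs every write to an append-only file and replays the log on startup to rebuild the data set:

```python
import json
import os

class JournaledStore:
    """Toy append-only-journal store, illustrating the AOF idea."""

    def __init__(self, path):
        self.path = path
        self.data = {}
        self._replay()

    def _replay(self):
        # Rebuild the in-memory state by replaying the journal.
        if os.path.exists(self.path):
            with open(self.path) as f:
                for line in f:
                    op = json.loads(line)
                    self.data[op["key"]] = op["value"]

    def set(self, key, value):
        # Append the operation to the journal first, then apply it.
        with open(self.path, "a") as f:
            f.write(json.dumps({"key": key, "value": value}) + "\n")
        self.data[key] = value
```

After a crash, constructing a new `JournaledStore` on the same file recovers every write that reached the journal, which is exactly why an AOF limits data loss to the writes not yet flushed.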

Pros of Redis

  • Performance is one of Redis’ winning qualities. Because Redis stores information in system memory, it maximizes performance by enabling low latency and swift data access, making it much faster than traditional databases. As an in-memory store, it can perform operations without interacting with the disk, keeping engine latency down to a few microseconds.
  • Traditional databases require complex and voluminous code to work optimally. With Redis, you only need a few lines of code to perform all the basic operations for storing and accessing data. Unlike traditional databases, Redis doesn’t require a query language; developers use a simple command structure instead.
  • Data replication is one of the standout features of Redis. It creates replica copies of a master instance, and replication is simple to configure. Replicas automatically reconnect to the master every time the link breaks and resynchronize, so the replicas aim to remain an exact copy of the master at all times. 
  • For most traditional caches and database systems, inserting large amounts of data into your cache quickly is impractical. Redis, however, allows users to load millions of records into the cache within a very short period through its mass insertion feature. 

Cons of Redis

  • Redis is an in-memory data store, so it uses your system’s RAM to perform operations. If you have a workload that makes millions of requests per second, you need a machine with plenty of memory for Redis to work efficiently. As a rule of thumb, you should provision noticeably more RAM than your data set occupies, since persistence operations such as snapshotting can temporarily increase memory usage. Otherwise you risk overloading the machine and possibly crashing it. You can minimize this risk by configuring a maximum memory limit for Redis. 
  • Redis is a data structure server; there are no query languages, only commands. Even though you can perform server-side scripting with Lua, Redis doesn’t support the query languages offered by popular relational database management systems (RDBMS). 
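The memory cap mentioned in the first point is configured in redis.conf. `maxmemory` and `maxmemory-policy` are real Redis directives; the 2 GB limit and LRU policy below are only example values:

```
# redis.conf — cap Redis' memory usage and choose an eviction policy
maxmemory 2gb
maxmemory-policy allkeys-lru
```

With a limit set, Redis evicts keys according to the chosen policy instead of growing until the machine runs out of RAM.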

Another tool in our comparison Redis vs Memcached – What’s the Difference? is Memcached. 

What is Memcached?

Memcached is a free, general-purpose, distributed caching system, mainly used to speed up dynamic web applications by reducing database load. Memcached is an in-memory key-value store for small chunks of arbitrary data, such as the results of database calls, API calls, or page rendering. The software is open source and runs on Unix-like operating systems (Linux and macOS) as well as Windows.

Memcached was first developed in 2003 by Brad Fitzpatrick to speed up his website, LiveJournal. Brad initially wrote the software in Perl; it was later rewritten in C by LiveJournal employee Anatoly Vorobey. Over the years, Memcached has grown from a simple website caching program into one of the world’s most widely used distributed caching systems, deployed by companies such as YouTube, Facebook, Twitter, and Amazon Web Services.

Here, with Redis vs Memcached – What’s the Difference? we aim to get to know Memcached’s features. 

Features of Memcached

  • Memcached is a standalone service. Therefore, it will run irrespective of the application status. Even if you take down your application or you’re experiencing downtime problems, the cached data will remain in memory as long as the service runs. 
  • Memcached uses distributed caching to speed up dynamic applications and reduce the load on backend systems, pooling memory from multiple inter-networked computers into a single logical in-memory data store. This allows applications to make faster database calls, read and write information faster, and scale to their maximum capacity. 
  • Memcached supports Check And Set (CAS) tokens. CAS is an operation that stores data only if no one else has updated it since you last read it. In other words, when multiple clients fetch and update the same data, Memcached’s CAS command sets the value only if it has not changed since the last fetch. This feature is very useful for resolving problems with concurrently updated cached data. 
  • Users can communicate with Memcached servers using the UDP or TCP protocol. If you communicate with the servers via TCP, you can use a simple text-based interface to exchange information. Clients don’t require a login or authentication to open a connection to the server, and you can terminate the connection without sending any specific disconnection command. 
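To illustrate the CAS semantics described above, here is a minimal Python sketch (a conceptual model, not the Memcached client API) in which a write succeeds only if the token returned by the last read still matches:

```python
# Sketch of Memcached-style check-and-set (CAS) semantics.
# Each value carries a token; a write succeeds only if the token
# still matches the one returned by the last read.

class CasCache:
    def __init__(self):
        self._store = {}    # key -> (token, value)
        self._counter = 0

    def gets(self, key):
        """Return (token, value), like Memcached's `gets` command."""
        return self._store.get(key)

    def set(self, key, value):
        self._counter += 1
        self._store[key] = (self._counter, value)

    def cas(self, key, token, value):
        """Store only if nobody updated the key since our read."""
        current = self._store.get(key)
        if current is None or current[0] != token:
            return False    # lost the race: someone else wrote first
        self._counter += 1
        self._store[key] = (self._counter, value)
        return True
```

If another client writes between your read and your CAS attempt, the token no longer matches and the update is rejected, which is exactly how Memcached prevents lost updates.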

Pros of Memcached

  • Unlike database systems such as PostgreSQL and MongoDB, which store most of their data on a physical disk, Memcached stores all of its data in the server’s memory. In-memory data stores don’t need frequent trips to disk when operating, which increases the speed of operations and supports sub-millisecond response times. 
  • Memcached’s multithreaded architecture makes it easy to scale. You can split data across multiple nodes, scaling out capacity by adding new nodes to your cluster. With Memcached, you can also scale up your computing capacity, allowing you to build highly scalable applications that are fast and reliable. 
  • Memcached is a very simple and generic memory caching system. This makes it flexible in application development, allowing developers to easily scale their applications without complex database programming. Memcached has client libraries for many languages, including PHP, C#, C/C++, Python, Node.js, Ruby, and Go. 
  • Memcached provides developers with one of the cheapest and most efficient methods of speeding up applications. The project remains free and open source, making it a very cost-effective way to speed up web applications.

Cons of Memcached

  • Memcached’s architecture prevents users from browsing the stored data. This was a design choice by the Memcached developers, as providing such access could hurt performance. However, it makes debugging harder, because you cannot get the Memcached server to report which keys it holds. 
  • Memcached isn’t designed primarily as security software; it simply helps speed up your applications. If you’re on a shared system, you need additional security measures, such as third-party firewalls, to ensure your web application remains secure. 

Now with Redis vs Memcached, it is time to learn what the differences are.

Differences Between Redis and Memcached

Data Types Support


One of the obvious differences between Redis and Memcached is that Redis supports multiple data types. Redis supports a wide range of types, such as strings, hashes, bitmaps, sorted sets, HyperLogLogs, and many more. These data types help you solve problems like caching, queuing, and event processing. 


On the other hand, Memcached doesn’t support as many data types as Redis: it supports only strings and integers. Any value you store is limited to one of these two types, and the only manipulation you can do with integers in Memcached is incrementing or decrementing them.

Data Persistence


On one side, Redis supports persistence, a very powerful feature that minimizes the risk of data loss during a system failure. Redis can rewrite its log in the background thanks to the Append Only File (AOF); in the event of a system failure, users lose only a few seconds of data when the system restarts. In addition to AOF, users can create snapshots of their data set and restore the most recent snapshot after a failure. 


Memcached does not support persistence. If you experience a system failure while using Memcached, you lose the cached data when the system restarts. You can still restore lost data by rebuilding the cache, though cache rebuilding can be expensive; it is acceptable mainly when the lost data can be forgone. 

Cache Expiration


Redis utilizes two main algorithms to determine cache expiration: Least Frequently Used (LFU) and Least Recently Used (LRU). The LFU algorithm provides a better cache eviction method and a better hit/miss ratio. With LFU, Redis tracks how often each item in the cache is accessed and eliminates the items you rarely use to make space, i.e., keys used often have a higher chance of staying in memory longer.

With LRU, Redis tracks when each item in the cache was last used. This differs from frequency-based eviction, because an item can be frequently used overall yet not recently used. The LRU algorithm determines which item you haven’t used for the longest time and evicts it from the cache.
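The LRU idea can be sketched in a few lines of Python. This is a conceptual model, not Redis’ actual implementation (Redis uses an approximated LRU based on sampling rather than exact tracking):

```python
from collections import OrderedDict

# Minimal LRU cache sketch: every access moves the key to the
# "recent" end; when the cache is full, the least recently used
# key at the other end is evicted.

class LRUCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self._items = OrderedDict()

    def get(self, key):
        if key not in self._items:
            return None
        self._items.move_to_end(key)     # mark as most recently used
        return self._items[key]

    def set(self, key, value):
        if key in self._items:
            self._items.move_to_end(key)
        self._items[key] = value
        if len(self._items) > self.capacity:
            self._items.popitem(last=False)  # evict least recently used
```

Note that an item read even once right before the cache fills survives the next eviction, while an item that was written long ago and never read again is the first to go.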


On the other hand, Memcached uses a modern LRU algorithm, similar to the Varnish massive storage engine, that organizes items in a doubly linked list. Hot items in the list are considered “safe” and are practically free from eviction. As items become less recently used, they move toward the colder end of the list; the colder they are, the more likely they are to be evicted. The coldest items on the list face eviction first. 

Memory Organization


Redis allocates memory per item stored. Whenever a user stores an item, Redis allocates memory with a malloc call and stores the item in the allocated space. This process repeats: Redis makes a malloc call for each item added. More recent versions of Redis use jemalloc instead, which helps reduce the memory fragmentation that plain malloc could not avoid. 


Memcached organizes memory into chunks, pages, and slabs. Users define the total memory available for item storage, and Memcached divides that memory into 1 MB pages. Each page is assigned to a slab class and split into fixed-size chunks; an item is stored in the smallest chunk that can hold it.
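A small Python sketch can illustrate the slab-sizing idea. The 1 MB page size and the 1.25 growth factor mirror Memcached’s defaults, but the base chunk size is illustrative and the code is a simplified model, not Memcached’s allocator:

```python
# Sketch of Memcached-style slab sizing: chunk sizes grow by a fixed
# factor up to the page size, and each item is stored in the smallest
# chunk that fits it.

PAGE_SIZE = 1024 * 1024   # 1 MB page, as in Memcached

def slab_classes(base_size=80, growth_factor=1.25, page_size=PAGE_SIZE):
    """Generate the chunk size for each slab class."""
    sizes = []
    size = base_size
    while size < page_size:
        sizes.append(size)
        size = int(size * growth_factor)
    return sizes

def pick_slab(item_size, classes):
    """Return the smallest chunk size that can hold the item."""
    for chunk in classes:
        if item_size <= chunk:
            return chunk
    return None  # larger than a page: cannot be stored
```

The trade-off is fragmentation versus bookkeeping: a 90-byte item placed in a 100-byte chunk wastes 10 bytes, but fixed chunk sizes make allocation and reuse very cheap.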

Ease of Use


The developers designed Redis to be a simple, fast, and efficient way of speeding up dynamic applications. With Redis, users write fewer lines of code to store and access data. Developers don’t have to write the complex queries used in traditional databases, and you can deploy Redis on almost any operating system by typing a few commands into the console.
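For example, storing and reading data takes a single command each in redis-cli; SET, GET, LPUSH, and EXPIRE are all standard Redis commands (the key names are made up for illustration):

```
SET user:1:name "Alice"       store a string value
GET user:1:name               returns "Alice"
LPUSH queue:jobs "job-42"     push an entry onto a list
EXPIRE user:1:name 60         make the key expire after 60 seconds
```

No schema, no query language, no connection boilerplate: the command structure is the entire interface.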


Memcached is not a difficult application to use, but it is not as simple as Redis. Implementing Memcached requires a lot of third-party, application-level integration, and changing the behavior Memcached provides requires a significant amount of coding. Actions like cache expiration or serving data for dynamic content are not as straightforward as in Redis.

Performance Architecture


Redis is, at its core, a single-threaded application. Versions before Redis 6 did not support multi-threaded I/O, so a single Redis instance could not take advantage of multiple CPU cores; users ran multiple instances per machine to use the extra cores. Redis’ single-threaded architecture has sparked debate among developers, who argue that a single-threaded application might not be as fast as commonly thought.


On the other hand, Memcached supports multithreading natively. A single Memcached instance can use multiple cores to boost performance and increase caching speed. Unlike Redis, Memcached has no problem saturating multiple cores, allowing it to scale better on multi-core machines. 

Thank you for your time spent with Redis vs Memcached – What’s the Difference? We shall conclude. 

Redis vs Memcached – What’s the Difference? Conclusion

Summing up, both Redis and Memcached are very powerful caching systems. They help speed up web applications, and both are open source and free to use. However, Redis provides more functionality, such as server-side scripting, multiple data types, and disk persistence. If your main use case for a server-side caching system involves features like persistence and multiple data type support, Redis is your best bet.

Please read more of our blog content about Redis over here.

Kamso Oguejiofor

Kamso is a mechanical engineer and writer with a strong interest in anything related to technology. He has over 2 years of experience writing on topics like cyber security, network security, and information security. When he’s not studying or writing, he likes to play basketball, work out, and binge watch anime and drama series.
