Redis commands

string

>APPEND key value

Append a value to a key

Available since 2.0.0.

Time complexity: O(1). The amortized time complexity is O(1) assuming the appended value is small and the already present value is of any size, since the dynamic string library used by Redis will double the free space available on every reallocation.

If key already exists and is a string, this command appends the value at the end of the string. If key does not exist it is created and set as an empty string, so APPEND will be similar to SET in this special case.

Return value

Integer reply: the length of the string after the append operation.

Examples

redis>  EXISTS mykey

(integer) 0

redis>  APPEND mykey "Hello"

(integer) 5

redis>  APPEND mykey " World"

(integer) 11

redis>  GET mykey

"Hello World"

Pattern: Time series

The APPEND command can be used to create a very compact representation of a list of fixed-size samples, usually referred to as a time series. Every time a new sample arrives we can store it using the command

APPEND timeseries "fixed-size sample"

Accessing individual elements in the time series is not hard:

  • STRLEN can be used in order to obtain the number of samples.
  • GETRANGE allows for random access of elements. If our time series has associated time information we can easily implement a binary search to get ranges, combining GETRANGE with the Lua scripting engine available in Redis 2.6.
  • SETRANGE can be used to overwrite an existing time series.

The limitation of this pattern is that we are forced into an append-only mode of operation; there is no way to easily trim the time series to a given size, because Redis currently lacks a command able to trim string objects. However, the space efficiency of time series stored in this way is remarkable.

Hint: it is possible to switch to a different key based on the current Unix time. In this way each key holds only a relatively small number of samples, which avoids dealing with very big keys and makes this pattern easier to distribute across many Redis instances.
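
A minimal client-side sketch of this pattern, assuming the redis-py client and a hypothetical scheme of one key per hour with 4-byte samples (none of these names are part of the pattern itself):

import time
import redis  # assumes the redis-py client

r = redis.Redis(host="localhost", port=6379)

SAMPLE_SIZE = 4  # fixed-size samples such as "0043"

def ts_key(now=None):
    # One key per hour keeps every key small and lets the data set be
    # spread across many Redis instances (hypothetical naming scheme).
    bucket = int(now if now is not None else time.time()) // 3600
    return "ts:%d" % bucket

def add_sample(sample):
    # APPEND returns the new length of the string.
    assert len(sample) == SAMPLE_SIZE
    return r.append(ts_key(), sample)

def num_samples(key):
    # STRLEN divided by the sample size gives the number of samples.
    return r.strlen(key) // SAMPLE_SIZE

def get_sample(key, n):
    # GETRANGE gives random access; the end offset is inclusive.
    start = n * SAMPLE_SIZE
    return r.getrange(key, start, start + SAMPLE_SIZE - 1)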

An example of sampling the temperature of a sensor using fixed-size strings (a binary format would be better in real implementations):

redis>  APPEND ts "0043"

(integer) 4

redis>  APPEND ts "0035"

(integer) 8

redis>  GETRANGE ts 0 3

"0043"

redis>  GETRANGE ts 4 7

"0035"

>BITCOUNT key [start end]

Count set bits in a string

Available since 2.6.0.

Time complexity: O(N)

Count the number of set bits (population counting) in a string.

By default all the bytes contained in the string are examined. It is possible to restrict the counting operation to an interval by passing the additional arguments start and end.

As with the GETRANGE command, start and end can contain negative values in order to index bytes starting from the end of the string, where -1 is the last byte, -2 is the penultimate, and so forth.

Non-existent keys are treated as empty strings, so the command will return zero.

Return value

Integer reply

The number of bits set to 1.

Examples

redis>  SET mykey "foobar"

OK

redis>  BITCOUNT mykey

(integer) 26

redis>  BITCOUNT mykey 0 0

(integer) 4

redis>  BITCOUNT mykey 1 1

(integer) 6

Pattern: real-time metrics using bitmaps

Bitmaps are a very space-efficient representation of certain kinds of information. One example is a Web application that needs the history of user visits, so that for instance it is possible to determine what users are good targets of beta features.

Using the SETBIT command this is trivial to accomplish, identifying every day with a small progressive integer. For instance day 0 is the first day the application was put online, day 1 the next day, and so forth.

Every time a user performs a page view, the application can register that the user visited the web site on the current day by using SETBIT to set the bit corresponding to that day.

Later it will be trivial to know the number of distinct days on which the user visited the web site, simply by calling the BITCOUNT command against the bitmap.
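
A minimal sketch of this pattern, assuming the redis-py client, a hypothetical visits:<user_id> key scheme and an arbitrary "day 0" date:

import datetime
import redis  # assumes the redis-py client

r = redis.Redis()

DAY_ZERO = datetime.date(2020, 1, 1)  # hypothetical day the application went online

def record_visit(user_id, date=None):
    # Set the bit corresponding to the current day in the user's bitmap.
    day = ((date or datetime.date.today()) - DAY_ZERO).days
    r.setbit("visits:%s" % user_id, day, 1)

def days_visited(user_id):
    # BITCOUNT over the whole bitmap = number of distinct days with at least one visit.
    return r.bitcount("visits:%s" % user_id)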

A similar pattern where user IDs are used instead of days is described in the article called “Fast easy realtime metrics using Redis bitmaps”.

Performance considerations

In the above example of counting days, even after the application has been online for 10 years we still have just 365*10 bits of data per user, that is, just 456 bytes per user. With this amount of data BITCOUNT is still as fast as any other O(1) Redis command like GET or INCR.

When the bitmap is big, there are two alternatives:

  • Keeping a separate key that is incremented every time the bitmap is modified. This can be made very efficient and atomic using a small Redis Lua script.
  • Running the bitmap incrementally using the BITCOUNT start and end optional parameters, accumulating the results client-side, and optionally caching the result into a key. A client-side sketch of this second approach follows this list.
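
A sketch of the second approach (incremental BITCOUNT with client-side accumulation), assuming the redis-py client and an arbitrary chunk size:

import redis  # assumes the redis-py client

r = redis.Redis()

CHUNK = 4096  # hypothetical range size in bytes per BITCOUNT call

def big_bitcount(key):
    # Sum BITCOUNT over byte ranges so no single call scans the whole bitmap;
    # the result could optionally be cached into another key.
    total, size = 0, r.strlen(key)
    for start in range(0, size, CHUNK):
        total += r.bitcount(key, start, min(start + CHUNK, size) - 1)
    return total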

>BITOP operation destkey key [key …]

Perform bitwise operations between strings

Available since 2.6.0.

Time complexity: O(N)

Perform a bitwise operation between multiple keys (containing string values) and store the result in the destination key.

The BITOP command supports four bitwise operations: AND, OR, XOR and NOT, thus the valid forms to call the command are:

  • BITOP AND destkey srckey1 srckey2 srckey3 … srckeyN

  • BITOP OR destkey srckey1 srckey2 srckey3 … srckeyN

  • BITOP XOR destkey srckey1 srckey2 srckey3 … srckeyN

  • BITOP NOT destkey srckey

As you can see, NOT is special as it only takes one input key, because it performs an inversion of bits and so only makes sense as a unary operator.

The result of the operation is always stored at destkey.

Handling of strings with different lengths

When an operation is performed between strings having different lengths, all the strings shorter than the longest string in the set are treated as if they were zero-padded up to the length of the longest string.

The same holds true for non-existent keys, that are considered as a stream of zero bytes up to the length of the longest string.

Return value

Integer reply

The size of the string stored in the destination key, that is equal to the size of the longest input string.

Examples

redis>  SET key1 "foobar"

OK

redis>  SET key2 "abcdef"

OK

redis>  BITOP AND dest key1 key2

(integer) 6

redis>  GET dest

"`bc`ab"

Pattern: real time metrics using bitmaps

BITOP is a good complement to the pattern documented in the BITCOUNT command documentation. Different bitmaps can be combined in order to obtain a target bitmap where the population counting operation is performed.
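
A minimal sketch of such a combination, assuming the redis-py client and the hypothetical visits:<user_id> bitmaps from the BITCOUNT pattern above:

import redis  # assumes the redis-py client

r = redis.Redis()

def days_visited_by_all(user_ids, dest="tmp:visits:all"):
    # AND the per-user daily-visit bitmaps into a destination key, then count
    # the days on which every listed user visited the site.
    r.bitop("AND", dest, *["visits:%s" % u for u in user_ids])
    return r.bitcount(dest)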

See the article called “Fast easy realtime metrics using Redis bitmaps” for interesting use cases.

Performance considerations

BITOP is a potentially slow command as it runs in O(N) time. Care should be taken when running it against long input strings.

For real-time metrics and statistics involving large inputs a good approach is to use a slave (with read-only option disabled) where the bit-wise operations are performed to avoid blocking the master instance.

>BITPOS key bit [start] [end]

Find first bit set or clear in a string

Available since 2.8.7.

Time complexity: O(N)

Return the position of the first bit set to 1 or 0 in a string.

The position is returned, thinking of the string as an array of bits from left to right, where the first byte’s most significant bit is at position 0, the second byte’s most significant bit is at position 8, and so forth.

The same bit position convention is followed by GETBIT and SETBIT.

By default, all the bytes contained in the string are examined. It is possible to look for bits only in a specified interval by passing the additional arguments start and end (it is possible to pass just start; in that case the operation will assume that the end is the last byte of the string. However, there are semantic differences, as explained later). The range is interpreted as a range of bytes and not a range of bits, so start=0 and end=2 means to look at the first three bytes.

Note that bit positions are returned always as absolute values starting from bit zero even when start and end are used to specify a range.

As with the GETRANGE command, start and end can contain negative values in order to index bytes starting from the end of the string, where -1 is the last byte, -2 is the penultimate, and so forth.

Non-existent keys are treated as empty strings.

Return value

Integer reply

The command returns the position of the first bit set to 1 or 0 according to the request.

If we look for set bits (the bit argument is 1) and the string is empty or composed of just zero bytes, -1 is returned.

If we look for clear bits (the bit argument is 0) and the string only contains bits set to 1, the function returns the first bit not part of the string on the right. So if the string is three bytes set to the value 0xff, the command BITPOS key 0 will return 24, since up to bit 23 all the bits are 1.

Basically, the function considers the right of the string as padded with zeros if you look for clear bits and specify no range or the start argument only.

However, this behavior changes if you are looking for clear bits and specify a range with both start and end. If no clear bit is found in the specified range, the function returns -1 as the user specified a clear range and there are no 0 bits in that range.

Examples

redis>  SET mykey "\xff\xf0\x00"

OK

redis>  BITPOS mykey 0

(integer) 12

redis>  SET mykey "\x00\xff\xf0"

OK

redis>  BITPOS mykey 1 0

(integer) 8

redis>  BITPOS mykey 1 2

(integer) 16

redis>  set mykey "\x00\x00\x00"

OK

redis>  BITPOS mykey 1

(integer) -1

>DECR key

Decrement the integer value of a key by one

Available since 1.0.0.

Time complexity: O(1)

Decrements the number stored at key by one. If the key does not exist, it is set to 0 before performing the operation. An error is returned if the key contains a value of the wrong type or contains a string that can not be represented as integer. This operation is limited to 64 bit signed integers.

See INCR for extra information on increment/decrement operations.

Return value

Integer reply: the value of key after the decrement

Examples

redis>  SET mykey "10"

OK

redis>  DECR mykey

(integer) 9

redis>  SET mykey "234293482390480948029348230948"

OK

redis>  DECR mykey

ERR value is not an integer or out of range

>DECRBY key decrement

Decrement the integer value of a key by the given number

Available since 1.0.0.

Time complexity: O(1)

Decrements the number stored at key by decrement. If the key does not exist, it is set to 0 before performing the operation. An error is returned if the key contains a value of the wrong type or contains a string that can not be represented as integer. This operation is limited to 64 bit signed integers.

See INCR for extra information on increment/decrement operations.

Return value

Integer reply: the value of key after the decrement

Examples

redis>  SET mykey "10"

OK

redis>  DECRBY mykey 3

(integer) 7

>GET key

Get the value of a key

Available since 1.0.0.

Time complexity: O(1)

Get the value of key. If the key does not exist the special value nil is returned. An error is returned if the value stored at key is not a string, because GET only handles string values.

Return value

Bulk string reply: the value of key, or nil when key does not exist.

Examples

redis>  GET nonexisting

(nil)

redis>  SET mykey "Hello"

OK

redis>  GET mykey

"Hello"

>GETBIT key offset

Returns the bit value at offset in the string value stored at key

Available since 2.2.0.

Time complexity: O(1)

Returns the bit value at offset in the string value stored at key.

When offset is beyond the string length, the string is assumed to be a contiguous space with 0 bits. When key does not exist it is assumed to be an empty string, so offset is always out of range and the value is also assumed to be a contiguous space with 0 bits.

Return value

Integer reply: the bit value stored at offset.

Examples

redis>  SETBIT mykey 7 1

(integer) 0

redis>  GETBIT mykey 0

(integer) 0

redis>  GETBIT mykey 7

(integer) 1

redis>  GETBIT mykey 100

(integer) 0

>GETRANGE key start end

Get a substring of the string stored at a key

Available since 2.4.0.

Time complexity: O(N) where N is the length of the returned string. The complexity is ultimately determined by the returned length, but because creating a substring from an existing string is very cheap, it can be considered O(1) for small strings.

Warning: this command was renamed to GETRANGE, it is called SUBSTR in Redis versions <= 2.0.

Returns the substring of the string value stored at key, determined by the offsets start and end (both are inclusive). Negative offsets can be used in order to provide an offset starting from the end of the string. So -1 means the last character, -2 the penultimate and so forth.

The function handles out of range requests by limiting the resulting range to the actual length of the string.

Return value

Bulk string reply

Examples

redis>  SET mykey "This is a string"

OK

redis>  GETRANGE mykey 0 3

"This"

redis>  GETRANGE mykey -3 -1

"ing"

redis>  GETRANGE mykey 0 -1

"This is a string"

redis>  GETRANGE mykey 10 100

"string"

>GETSET key value

Set the string value of a key and return its old value

Available since 1.0.0.

Time complexity: O(1)

Atomically sets key to value and returns the old value stored at key. Returns an error when key exists but does not hold a string value.

Design pattern

GETSET can be used together with INCR for counting with atomic reset. For example: a process may call INCR against the key mycounter every time some event occurs, but from time to time we need to get the value of the counter and reset it to zero atomically. This can be done using GETSET mycounter "0":

redis>  INCR mycounter

(integer) 1

redis>  GETSET mycounter "0"

"1"

redis>  GET mycounter

"0"

Return value

Bulk string reply: the old value stored at key, or nil when key did not exist.

Examples

redis>  SET mykey "Hello"

OK

redis>  GETSET mykey "World"

"Hello"

redis>  GET mykey

"World"

>INCR key

Increment the integer value of a key by one

Available since 1.0.0.

Time complexity: O(1)

Increments the number stored at key by one. If the key does not exist, it is set to 0 before performing the operation. An error is returned if the key contains a value of the wrong type or contains a string that can not be represented as integer. This operation is limited to 64 bit signed integers.

Note: this is a string operation because Redis does not have a dedicated integer type. The string stored at the key is interpreted as a base-10 64 bit signed integer to execute the operation.

Redis stores integers in their integer representation, so for string values that actually hold an integer, there is no overhead for storing the string representation of the integer.

Return value

Integer reply: the value of key after the increment

Examples

redis>  SET mykey "10"

OK

redis>  INCR mykey

(integer) 11

redis>  GET mykey

"11"

Pattern: Counter

The counter pattern is the most obvious thing you can do with Redis atomic increment operations. The idea is simply to send an INCR command to Redis every time an operation occurs. For instance, in a web application we may want to know how many page views a given user performed on every day of the year.

To do so the web application may simply increment a key every time the user performs a page view, creating the key name by concatenating the user ID and a string representing the current date.
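
A minimal sketch of this counter, assuming the redis-py client and a hypothetical views:<user_id>:<date> key naming:

import datetime
import redis  # assumes the redis-py client

r = redis.Redis()

def count_page_view(user_id):
    # One counter per user per day; INCR creates the key on first use.
    key = "views:%s:%s" % (user_id, datetime.date.today().isoformat())
    return r.incr(key)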

This simple pattern can be extended in many ways:

  • It is possible to use INCR and EXPIRE together at every page view to have a counter counting only the latest N page views separated by less than the specified amount of seconds.
  • A client may use GETSET in order to atomically get the current counter value and reset it to zero.
  • Using other atomic increment/decrement commands like DECR or INCRBY it is possible to handle values that may get bigger or smaller depending on the operations performed by the user. Imagine for instance the score of different users in an online game.

Pattern: Rate limiter

The rate limiter pattern is a special counter that is used to limit the rate at which an operation can be performed. The classical materialization of this pattern involves limiting the number of requests that can be performed against a public API.

We provide two implementations of this pattern using INCR, where we assume that the problem to solve is limiting the number of API calls to a maximum of ten requests per second per IP address.

Pattern: Rate limiter 1

The simpler and more direct implementation of this pattern is the following:

FUNCTION LIMIT_API_CALL(ip)
ts = CURRENT_UNIX_TIME()
keyname = ip+":"+ts
current = GET(keyname)
IF current != NULL AND current > 10 THEN
    ERROR "too many requests per second"
ELSE
    MULTI
        INCR(keyname,1)
        EXPIRE(keyname,10)
    EXEC
    PERFORM_API_CALL()
END

Basically we have a counter for every IP, for every different second. But these counters are always incremented with a 10 second expire set, so that they'll be removed by Redis automatically once the current second has passed.

Note the use of MULTI and EXEC in order to make sure that we'll both increment the counter and set the expire at every API call.

Pattern: Rate limiter 2

An alternative implementation uses a single counter, but it is a bit more complex to get right without race conditions. We'll examine different variants.

FUNCTION LIMIT_API_CALL(ip):
current = GET(ip)
IF current != NULL AND current > 10 THEN
    ERROR "too many requests per second"
ELSE
    value = INCR(ip)
    IF value == 1 THEN
        EXPIRE(ip,1)
    END
    PERFORM_API_CALL()
END

The counter is created in a way that it only will survive one second, starting from the first request performed in the current second. If there are more than 10 requests in the same second the counter will reach a value greater than 10, otherwise it will expire and start again from 0.

In the above code there is a race condition. If for some reason the client performs the INCR command but does not perform the EXPIRE, the key will leak until we see the same IP address again.

This can be fixed easily by turning the INCR with optional EXPIRE into a Lua script that is sent using the EVAL command (only available since Redis version 2.6).

local current
current = redis.call("incr",KEYS[1])
if tonumber(current) == 1 then
    redis.call("expire",KEYS[1],1)
end
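
A sketch of how a client could use this script, assuming the redis-py client; the GET check and the EVAL call mirror the pseudocode above:

import redis  # assumes the redis-py client

r = redis.Redis()

INCR_AND_EXPIRE = """
local current
current = redis.call("incr",KEYS[1])
if tonumber(current) == 1 then
    redis.call("expire",KEYS[1],1)
end
"""

def limit_api_call(ip, limit=10):
    current = r.get(ip)
    if current is not None and int(current) > limit:
        raise RuntimeError("too many requests per second")
    # The INCR + optional EXPIRE runs atomically on the server via EVAL.
    r.eval(INCR_AND_EXPIRE, 1, ip)
    # ... perform the API call ...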

There is a different way to fix this issue without using scripting, by using Redis lists instead of counters. The implementation is more complex and uses more advanced features, but it has the advantage of remembering the IP addresses of the clients currently performing an API call, which may or may not be useful depending on the application.

FUNCTION LIMIT_API_CALL(ip)
current = LLEN(ip)
IF current > 10 THEN
    ERROR "too many requests per second"
ELSE
    IF EXISTS(ip) == FALSE
        MULTI
            RPUSH(ip,ip)
            EXPIRE(ip,1)
        EXEC
    ELSE
        RPUSHX(ip,ip)
    END
    PERFORM_API_CALL()
END

The RPUSHX command only pushes the element if the key already exists.

Note that we have a race here, but it is not a problem: EXISTS may return false but the key may be created by another client before we create it inside the MULTI / EXEC block. However this race will just miss an API call under rare conditions, so the rate limiting will still work correctly.

>INCRBY key increment

Increment the integer value of a key by the given amount

Available since 1.0.0.

Time complexity: O(1)

Increments the number stored at key by increment. If the key does not exist, it is set to 0 before performing the operation. An error is returned if the key contains a value of the wrong type or contains a string that can not be represented as integer. This operation is limited to 64 bit signed integers.

See INCR for extra information on increment/decrement operations.

Return value

Integer reply: the value of key after the increment

Examples

redis>  SET mykey "10"

OK

redis>  INCRBY mykey 5

(integer) 15

>INCRBYFLOAT key increment

Increment the float value of a key by the given amount

Available since 2.6.0.

Time complexity: O(1)

Increment the string representing a floating point number stored at key by the specified increment. If the key does not exist, it is set to 0 before performing the operation. An error is returned if one of the following conditions occur:

  • The key contains a value of the wrong type (not a string).
  • The current key content or the specified increment are not parsable as a double precision floating point number.

If the command is successful the new incremented value is stored as the new value of the key (replacing the old one), and returned to the caller as a string.

Both the value already contained in the string key and the increment argument can be optionally provided in exponential notation, however the value computed after the increment is stored consistently in the same format, that is, an integer number followed (if needed) by a dot, and a variable number of digits representing the decimal part of the number. Trailing zeroes are always removed.

The precision of the output is fixed at 17 digits after the decimal point regardless of the actual internal precision of the computation.

Return value

Bulk string reply: the value of key after the increment.

Examples

redis>  SET mykey 10.50

OK

redis>  INCRBYFLOAT mykey 0.1

"10.6"

redis>  SET mykey 5.0e3

OK

redis>  INCRBYFLOAT mykey 2.0e2

"5200"

Implementation details

The command is always propagated in the replication link and the Append Only File as a SET operation, so that differences in the underlying floating point math implementation will not be sources of inconsistency.

>MGET key [key …]

Get the values of all the given keys

Available since 1.0.0.

Time complexity: O(N) where N is the number of keys to retrieve.

Returns the values of all specified keys. For every key that does not hold a string value or does not exist, the special value nil is returned. Because of this, the operation never fails.

Return value

Array reply: list of values at the specified keys.

Examples

redis>  SET key1 "Hello"

OK

redis>  SET key2 "World"

OK

redis>  MGET key1 key2 nonexisting

1) "Hello"
2) "World"
3) (nil)

>MSET key value [key value …]

Set multiple keys to multiple values

Available since 1.0.1.

Time complexity: O(N) where N is the number of keys to set.

Sets the given keys to their respective values. MSET replaces existing values with new values, just as regular SET. See MSETNX if you don’t want to overwrite existing values.

MSET is atomic, so all given keys are set at once. It is not possible for clients to see that some of the keys were updated while others are unchanged.

Return value

Simple string reply: always OK since MSET can’t fail.

Examples

redis>  MSET key1 "Hello" key2 "World"

OK

redis>  GET key1

"Hello"

redis>  GET key2

"World"

>MSETNX key value [key value …]

Set multiple keys to multiple values, only if none of the keys exist

Available since 1.0.1.

Time complexity: O(N) where N is the number of keys to set.

Sets the given keys to their respective values. MSETNX will not perform any operation at all even if just a single key already exists.

Because of this semantic MSETNX can be used in order to set different keys representing different fields of a unique logical object in a way that ensures that either all the fields or none at all are set.

MSETNX is atomic, so all given keys are set at once. It is not possible for clients to see that some of the keys were updated while others are unchanged.

Return value

Integer reply, specifically:

  • 1 if all the keys were set.
  • 0 if no key was set (at least one key already existed).

Examples

redis>  MSETNX key1 "Hello" key2 "there"

(integer) 1

redis>  MSETNX key2 "there" key3 "world"

(integer) 0

redis>  MGET key1 key2 key3

1) "Hello"
2) "there"
3) (nil)

>PSETEX key milliseconds value

Set the value and expiration in milliseconds of a key

Available since 2.6.0.

Time complexity: O(1)

PSETEX works exactly like SETEX with the sole difference that the expire time is specified in milliseconds instead of seconds.

Examples

redis>  PSETEX mykey 1000 "Hello"

OK

redis>  PTTL mykey

(integer) 999

redis>  GET mykey

"Hello"

>SET key value [EX seconds] [PX milliseconds] [NX|XX]

Set the string value of a key

Available since 1.0.0.

Time complexity: O(1)

Set key to hold the string value. If key already holds a value, it is overwritten, regardless of its type. Any previous time to live associated with the key is discarded on successful SET operation.

Options

Starting with Redis 2.6.12 SET supports a set of options that modify its behavior:

  • EX seconds – Set the specified expire time, in seconds.
  • PX milliseconds – Set the specified expire time, in milliseconds.
  • NX – Only set the key if it does not already exist.
  • XX – Only set the key if it already exists.

Note: Since the SET command options can replace SETNX, SETEX, PSETEX, it is possible that in future versions of Redis these three commands will be deprecated and finally removed.

Return value

Simple string reply: OK if SET was executed correctly. Null reply: a Null Bulk Reply is returned if the SET operation was not performed because the user specified the NX or XX option but the condition was not met.

Examples

redis>  SET mykey "Hello"

OK

redis>  GET mykey

"Hello"

Patterns

The command SET resource-name anystring NX EX max-lock-time is a simple way to implement a locking system with Redis.

A client can acquire the lock if the above command returns OK (or retry after some time if the command returns Nil), and remove the lock just using DEL.

The lock will be auto-released after the expire time is reached.

It is possible to make this system more robust by modifying the unlock schema as follows:

  • Instead of setting a fixed string, set a non-guessable large random string, called token.
  • Instead of releasing the lock with DEL, send a script that only removes the key if the value matches.

This avoids the case where a client tries to release the lock after the expire time has passed, deleting a key that was created by another client that acquired the lock later.

An example of unlock script would be similar to the following:

if redis.call("get",KEYS[1]) == ARGV[1]
then
    return redis.call("del",KEYS[1])
else
    return 0
end

The script should be called with EVAL …script… 1 resource-name token-value
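
A minimal sketch of the whole acquire/release cycle, assuming the redis-py client; the token generation and timeout values are arbitrary:

import os
import redis  # assumes the redis-py client

r = redis.Redis()

RELEASE_SCRIPT = """
if redis.call("get",KEYS[1]) == ARGV[1]
then
    return redis.call("del",KEYS[1])
else
    return 0
end
"""

def acquire_lock(resource, max_lock_time=30):
    token = os.urandom(20).hex()  # non-guessable random token
    # SET resource token NX EX max-lock-time, as described above.
    if r.set(resource, token, nx=True, ex=max_lock_time):
        return token
    return None  # lock is held by someone else; retry later

def release_lock(resource, token):
    # Deletes the key only if it still holds our token (the script above).
    return r.eval(RELEASE_SCRIPT, 1, resource, token)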

>SETBIT key offset value

Sets or clears the bit at offset in the string value stored at key

Available since 2.2.0.

Time complexity: O(1)

Sets or clears the bit at offset in the string value stored at key.

The bit is either set or cleared depending on value, which can be either 0 or 1. When key does not exist, a new string value is created. The string is grown to make sure it can hold a bit at offset. The offset argument is required to be greater than or equal to 0, and smaller than 2^32 (this limits bitmaps to 512MB). When the string at key is grown, added bits are set to 0.

Warning: When setting the last possible bit (offset equal to 2^32 - 1) and the string value stored at key does not yet hold a string value, or holds a small string value, Redis needs to allocate all intermediate memory which can block the server for some time. On a 2010 MacBook Pro, setting bit number 2^32 - 1 (512MB allocation) takes ~300ms, setting bit number 2^30 - 1 (128MB allocation) takes ~80ms, setting bit number 2^28 - 1 (32MB allocation) takes ~30ms and setting bit number 2^26 - 1 (8MB allocation) takes ~8ms. Note that once this first allocation is done, subsequent calls to SETBIT for the same key will not have the allocation overhead.

Return value

Integer reply: the original bit value stored at offset.

Examples

redis>  SETBIT mykey 7 1

(integer) 0

redis>  SETBIT mykey 7 0

(integer) 1

redis>  GET mykey

"\u0000"

>SETEX key seconds value

Set the value and expiration of a key

Available since 2.0.0.

Time complexity: O(1)

Set key to hold the string value and set key to timeout after a given number of seconds. This command is equivalent to executing the following commands:

SET mykey value
EXPIRE mykey seconds

SETEX is atomic, and can be reproduced by using the previous two commands inside a MULTI / EXEC block. It is provided as a faster alternative to the given sequence of operations, because this operation is very common when Redis is used as a cache.

An error is returned when seconds is invalid.

Return value

Simple string reply

Examples

redis>  SETEX mykey 10 "Hello"

OK

redis>  TTL mykey

(integer) 10

redis>  GET mykey

"Hello"

>SETNX key value

Set the value of a key, only if the key does not exist

Available since 1.0.0.

Time complexity: O(1)

Set key to hold string value if key does not exist. In that case, it is equal to SET. When key already holds a value, no operation is performed. SETNX is short for “SET if Not eXists”.

Return value

Integer reply, specifically:

  • 1 if the key was set
  • 0 if the key was not set

Examples

redis>  SETNX mykey "Hello"

(integer) 1

redis>  SETNX mykey "World"

(integer) 0

redis>  GET mykey

"Hello"

Design pattern: Locking with SETNX

NOTE: Starting with Redis 2.6.12 it is possible to create a much simpler locking primitive using the SET command to acquire the lock, and a simple Lua script to release the lock. The pattern is documented in the SET command page.

The old SETNX based pattern is documented below for historical reasons.

SETNX can be used as a locking primitive. For example, to acquire the lock of the key foo, the client could try the following:

SETNX lock.foo <current Unix time + lock timeout + 1>

If SETNX returns 1 the client acquired the lock, setting the lock.foo key to the Unix time at which the lock should no longer be considered valid. The client will later use DEL lock.foo in order to release the lock.

If SETNX returns 0 the key is already locked by some other client. We can either return to the caller if it’s a non blocking lock, or enter a loop retrying to hold the lock until we succeed or some kind of timeout expires.

Handling deadlocks

In the above locking algorithm there is a problem: what happens if a client fails, crashes, or is otherwise not able to release the lock? It’s possible to detect this condition because the lock key contains a UNIX timestamp. If such a timestamp is equal to the current Unix time the lock is no longer valid.

When this happens we can't just call DEL against the key to remove the lock and then try to issue a SETNX, as there is a race condition here when multiple clients detect an expired lock and try to release it at the same time:

  • C1 and C2 read lock.foo to check the timestamp, because they both received 0 after executing SETNX, as the lock is still held by C3 that crashed after holding the lock.
  • C1 sends DEL lock.foo
  • C1 sends SETNX lock.foo and it succeeds
  • C2 sends DEL lock.foo
  • C2 sends SETNX lock.foo and it succeeds
  • ERROR: both C1 and C2 acquired the lock because of the race condition.

Fortunately, it’s possible to avoid this issue using the following algorithm. Let’s see how C4, our sane client, uses the good algorithm:

  • C4 sends SETNX lock.foo in order to acquire the lock

  • The crashed client C3 still holds it, so Redis will reply with 0 to C4.

  • C4 sends GET lock.foo to check if the lock expired. If it has not, it will sleep for some time and retry from the start.

  • Instead, if the lock is expired because the Unix time at lock.foo is older than the current Unix time, C4 tries to perform:

GETSET lock.foo <current Unix timestamp + lock timeout + 1>
  • Because of the GETSET semantic, C4 can check if the old value stored at key is still an expired timestamp. If it is, the lock was acquired.

  • If another client, for instance C5, was faster than C4 and acquired the lock with the GETSET operation, the C4 GETSET operation will return a non expired timestamp. C4 will simply restart from the first step. Note that even if C4 sets the key a few seconds in the future this is not a problem.

Important note: In order to make this locking algorithm more robust, a client holding a lock should always check that the timeout didn't expire before unlocking the key with DEL, because client failures can be complex: a client may not just crash but also block for a long time on some operation, and then try to issue DEL much later, when the lock is already held by another client.
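
For completeness, a minimal client-side sketch of this historical algorithm, assuming the redis-py client and timestamps stored as integer strings:

import time
import redis  # assumes the redis-py client

r = redis.Redis()

def acquire(lock_key, lock_timeout=10):
    while True:
        expires = int(time.time()) + lock_timeout + 1
        if r.setnx(lock_key, expires):
            return expires                    # lock acquired
        current = r.get(lock_key)
        if current is not None and int(current) > time.time():
            time.sleep(0.1)                   # lock still valid: wait and retry
            continue
        # The lock looks expired: try to take it over atomically with GETSET.
        expires = int(time.time()) + lock_timeout + 1
        old = r.getset(lock_key, expires)
        if old is None or int(old) <= time.time():
            return expires                    # the old value was expired: lock acquired
        # Another client was faster: restart from the first step.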

>SETRANGE key offset value

Overwrite part of a string at key starting at the specified offset

Available since 2.2.0.

Time complexity: O(1), not counting the time taken to copy the new string in place. Usually, this string is very small so the amortized complexity is O(1). Otherwise, complexity is O(M) with M being the length of the value argument.

Overwrites part of the string stored at key, starting at the specified offset, for the entire length of value. If the offset is larger than the current length of the string at key, the string is padded with zero-bytes to make offset fit. Non-existing keys are considered as empty strings, so this command will make sure it holds a string large enough to be able to set value at offset.

Note that the maximum offset that you can set is 2^29 - 1 (536870911), as Redis Strings are limited to 512 megabytes. If you need to grow beyond this size, you can use multiple keys.

Warning: When setting the last possible byte and the string value stored at key does not yet hold a string value, or holds a small string value, Redis needs to allocate all intermediate memory which can block the server for some time. On a 2010 MacBook Pro, setting byte number 536870911 (512MB allocation) takes ~300ms, setting byte number 134217728 (128MB allocation) takes ~80ms, setting byte number 33554432 (32MB allocation) takes ~30ms and setting byte number 8388608 (8MB allocation) takes ~8ms. Note that once this first allocation is done, subsequent calls to SETRANGE for the same key will not have the allocation overhead.

Patterns

Thanks to SETRANGE and the analogous GETRANGE commands, you can use Redis strings as a linear array with O(1) random access. This is a very fast and efficient storage in many real world use cases.
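
A minimal sketch of this array-like usage, assuming the redis-py client and a hypothetical fixed record size:

import redis  # assumes the redis-py client

r = redis.Redis()

RECORD_SIZE = 8  # hypothetical fixed size of every record, in bytes

def write_record(key, index, data):
    # O(1) random-access write: overwrite the record at a fixed offset.
    assert len(data) == RECORD_SIZE
    return r.setrange(key, index * RECORD_SIZE, data)

def read_record(key, index):
    # GETRANGE end offset is inclusive.
    start = index * RECORD_SIZE
    return r.getrange(key, start, start + RECORD_SIZE - 1)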

Return value

Integer reply: the length of the string after it was modified by the command.

Examples

Basic usage:

redis>  SET key1 "Hello World"

OK

redis>  SETRANGE key1 6 "Redis"

(integer) 11

redis>  GET key1

"Hello Redis"

Example of zero padding:

redis>  SETRANGE key2 6 "Redis"

(integer) 11

redis>  GET key2

"\u0000\u0000\u0000\u0000\u0000\u0000Redis"

>STRLEN key

Get the length of the value stored in a key

Available since 2.2.0.

Time complexity: O(1)

Returns the length of the string value stored at key. An error is returned when key holds a non-string value.

Return value

Integer reply: the length of the string at key, or 0 when key does not exist.

Examples

redis>  SET mykey "Hello world"

OK

redis>  STRLEN mykey

(integer) 11

redis>  STRLEN nonexisting

(integer) 0

connection

>AUTH password

Authenticate to the server

Available since 1.0.0.

Request for authentication in a password-protected Redis server. Redis can be instructed to require a password before allowing clients to execute commands. This is done using the requirepass directive in the configuration file.

If password matches the password in the configuration file, the server replies with the OK status code and starts accepting commands. Otherwise, an error is returned and the client needs to try a new password.

Note: because of the high performance nature of Redis, it is possible to try a lot of passwords in parallel in very short time, so make sure to generate a strong and very long password so that this attack is infeasible.

Return value

Simple string reply

>ECHO message

Echo the given string

Available since 1.0.0.

Returns message.

Return value

Bulk string reply

Examples

redis>  ECHO "Hello World!"

"Hello World!"

>PING

Ping the server

Available since 1.0.0.

Returns PONG. This command is often used to test if a connection is still alive, or to measure latency.

Return value

Simple string reply

Examples

redis>  PING

PONG

>QUIT

Close the connection

Available since 1.0.0.

Ask the server to close the connection. The connection is closed as soon as all pending replies have been written to the client.

Return value

Simple string reply: always OK.

>SELECT index

Change the selected database for the current connection

Available since 1.0.0.

Select the DB having the specified zero-based numeric index. New connections always use DB 0.

Return value

Simple string reply

server

>BGREWRITEAOF

Asynchronously rewrite the append-only file

Available since 1.0.0.

Instruct Redis to start an Append Only File rewrite process. The rewrite will create a small optimized version of the current Append Only File.

If BGREWRITEAOF fails, no data gets lost as the old AOF will be untouched.

The rewrite will be only triggered by Redis if there is not already a background process doing persistence. Specifically:

  • If a Redis child is creating a snapshot on disk, the AOF rewrite is scheduled but not started until the saving child producing the RDB file terminates. In this case the BGREWRITEAOF will still return an OK code, but with an appropriate message. You can check if an AOF rewrite is scheduled looking at the INFO command as of Redis 2.6.
  • If an AOF rewrite is already in progress the command returns an error and no AOF rewrite will be scheduled for a later time.

Since Redis 2.4 the AOF rewrite is automatically triggered by Redis, however the BGREWRITEAOF command can be used to trigger a rewrite at any time.

Please refer to the persistence documentation for detailed information.

Return value

Simple string reply: always OK.

>BGSAVE

Asynchronously save the dataset to disk

Available since 1.0.0.

Save the DB in background. The OK code is immediately returned. Redis forks, the parent continues to serve the clients, the child saves the DB on disk then exits. A client may be able to check if the operation succeeded using the LASTSAVE command.

Please refer to the persistence documentation for detailed information.

Return value

Simple string reply

>CLIENT KILL [ip:port] [ID client-id] [TYPE normal|slave|pubsub] [ADDR ip:port] [SKIPME yes/no]

Kill the connection of a client

Available since 2.4.0.

Time complexity: O(N) where N is the number of client connections

The CLIENT KILL command closes a given client connection. Up to Redis 2.8.11 it was possible to close a connection only by client address, using the following form:

CLIENT KILL addr:port

The ip:port should match a line returned by the CLIENT LIST command (addr field).

However starting with Redis 2.8.12 or greater, the command accepts the following form:

CLIENT KILL <filter> <value> ... ... <filter> <value>

With the new form it is possible to kill clients by different attributes instead of killing just by address. The following filters are available:

  • CLIENT KILL ADDR ip:port. This is exactly the same as the old three-arguments behavior.
  • CLIENT KILL ID client-id. Allows to kill a client by its unique ID field, which was introduced in the CLIENT LIST command starting from Redis 2.8.12.
  • CLIENT KILL TYPE type, where type is one of normal, slave, pubsub. This closes the connections of all the clients in the specified class. Note that clients blocked into the MONITOR command are considered to belong to the normal class.
  • CLIENT KILL SKIPME yes/no. By default this option is set to yes, that is, the client calling the command will not get killed, however setting this option to no will have the effect of also killing the client calling the command.

It is possible to provide multiple filters at the same time. The command will handle multiple filters via logical AND. For example:

CLIENT KILL addr 127.0.0.1:6379 type slave

is valid and will kill only slaves with the specified address. This format containing multiple filters is rarely useful currently.

When the new form is used the command no longer returns OK or an error, but instead the number of killed clients, that may be zero.

CLIENT KILL and Redis Sentinel

Recent versions of Redis Sentinel (Redis 2.8.12 or greater) use CLIENT KILL in order to kill clients when an instance is reconfigured, in order to force clients to perform the handshake with one Sentinel again and update its configuration.

Notes

Due to the single-threaded nature of Redis, it is not possible to kill a client connection while it is executing a command. From the client point of view, the connection can never be closed in the middle of the execution of a command. However, the client will notice the connection has been closed only when the next command is sent (and results in a network error).

Return value

When called with the three arguments format:

Simple string reply: OK if the connection exists and has been closed

When called with the filter / value format:

Integer reply: the number of clients killed.

>CLIENT LIST

Get the list of client connections

Available since 2.4.0.

Time complexity: O(N) where N is the number of client connections

The CLIENT LIST command returns information and statistics about the client connections to the server in a mostly human readable format.

Return value

Bulk string reply: a unique string, formatted as follows:

  • One client connection per line (separated by LF)
  • Each line is composed of a succession of property=value fields separated by a space character.

Here is the meaning of the fields:

  • id: a unique 64-bit client ID (introduced in Redis 2.8.12).
  • addr: address/port of the client
  • fd: file descriptor corresponding to the socket
  • age: total duration of the connection in seconds
  • idle: idle time of the connection in seconds
  • flags: client flags (see below)
  • db: current database ID
  • sub: number of channel subscriptions
  • psub: number of pattern matching subscriptions
  • multi: number of commands in a MULTI/EXEC context
  • qbuf: query buffer length (0 means no query pending)
  • qbuf-free: free space of the query buffer (0 means the buffer is full)
  • obl: output buffer length
  • oll: output list length (replies are queued in this list when the buffer is full)
  • omem: output buffer memory usage
  • events: file descriptor events (see below)
  • cmd: last command played

The client flags can be a combination of:

O: the client is a slave in MONITOR mode
S: the client is a normal slave server
M: the client is a master
x: the client is in a MULTI/EXEC context
b: the client is waiting in a blocking operation
i: the client is waiting for a VM I/O (deprecated)
d: a watched key has been modified - EXEC will fail
c: connection to be closed after writing entire reply
u: the client is unblocked
A: connection to be closed ASAP
N: no specific flag set

The file descriptor events can be:

r: the client socket is readable (event loop)
w: the client socket is writable (event loop)

Notes

New fields are regularly added for debugging purposes. Some could be removed in the future. A version safe Redis client using this command should parse the output accordingly (i.e. handle missing fields gracefully and skip unknown fields).

>CLIENT GETNAME

Get the current connection name

Available since 2.6.9.

Time complexity: O(1)

The CLIENT GETNAME returns the name of the current connection as set by CLIENT SETNAME. Since every new connection starts without an associated name, if no name was assigned a null bulk reply is returned.

Return value

Bulk string reply: The connection name, or a null bulk reply if no name is set.

>CLIENT PAUSE timeout

Stop processing commands from clients for some time

Available since 2.9.50.

Time complexity: O(1)

CLIENT PAUSE is a connections control command able to suspend all the Redis clients for the specified amount of time (in milliseconds).

The command performs the following actions:

  • It stops processing all the pending commands from normal and pub/sub clients. However interactions with slaves will continue normally.
  • However it returns OK to the caller ASAP, so the CLIENT PAUSE command execution is not paused by itself.
  • When the specified amount of time has elapsed, all the clients are unblocked: this will trigger the processing of all the commands accumulated in the query buffer of every client during the pause.

This command is useful as it makes it possible to switch clients from one Redis instance to another in a controlled way. For example during an instance upgrade the system administrator could do the following:

  • Pause the clients using CLIENT PAUSE
  • Wait a few seconds to make sure the slaves processed the latest replication stream from the master.
  • Turn one of the slaves into a master.
  • Reconfigure clients to connect with the new master.

It is possible to send CLIENT PAUSE in a MULTI/EXEC block together with the INFO replication command in order to get the current master offset at the time the clients are blocked. This way it is possible to wait for a specific offset in the slave side in order to make sure all the replication stream was processed.
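
A minimal sketch of that MULTI/EXEC combination, assuming the redis-py client and a 10 second pause:

import redis  # assumes the redis-py client

r = redis.Redis()

# Pause clients for 10000 milliseconds and read the replication info in the
# same MULTI/EXEC block, so the reported offset refers to the exact moment
# the clients were blocked.
pipe = r.pipeline(transaction=True)
pipe.execute_command("CLIENT", "PAUSE", "10000")
pipe.execute_command("INFO", "replication")
pause_reply, replication_info = pipe.execute()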

Return value

Simple string reply: The command returns OK or an error if the timeout is invalid.

>CLIENT SETNAME connection-name

Set the current connection name

Available since 2.6.9.

Time complexity: O(1)

The CLIENT SETNAME command assigns a name to the current connection.

The assigned name is displayed in the output of CLIENT LIST so that it is possible to identify the client that performed a given connection.

For instance when Redis is used in order to implement a queue, producers and consumers of messages may want to set the name of the connection according to their role.

There is no limit to the length of the name that can be assigned, other than the usual limits of the Redis string type (512 MB). However it is not possible to use spaces in the connection name, as this would violate the format of the CLIENT LIST reply.

It is possible to entirely remove the connection name by setting it to the empty string. The empty string is not a valid connection name, since it serves this specific purpose.

The connection name can be inspected using CLIENT GETNAME.

Every new connection starts without an assigned name.

Tip: setting names to connections is a good way to debug connection leaks due to bugs in the application using Redis.

Return value

Simple string reply: OK if the connection name was successfully set.

>CLUSTER SLOTS

Get array of Cluster slot to node mappings

Available since 3.0.0.

Time complexity: O(N) where N is the total number of Cluster nodes

Returns Array reply of current cluster state.

CLUSTER SLOTS returns details about which cluster slots map to which Redis instances.

Nested Result Array

Each nested result is:

  • Start slot range
  • End slot range
  • Master for slot range represented as nested IP/Port array
  • First replica of master for slot range
  • Second replica
  • …continues until all replicas for this master are returned.

Each result includes all active replicas of the master instance for the listed slot range. Failed replicas are not returned.

The third nested reply is guaranteed to be the IP/Port pair of the master instance for the slot range. All IP/Port pairs after the third nested reply are replicas of the master.

If a cluster instance has non-contiguous slots (e.g. 1-400,900,1800-6000) then master and replica IP/Port results will be duplicated for each top-level slot range reply.

Sample Output

127.0.0.1:7001> cluster slots
1) 1) (integer) 0
   2) (integer) 4095
   3) 1) "127.0.0.1"
      2) (integer) 7000
   4) 1) "127.0.0.1"
      2) (integer) 7004
2) 1) (integer) 12288
   2) (integer) 16383
   3) 1) "127.0.0.1"
      2) (integer) 7003
   4) 1) "127.0.0.1"
      2) (integer) 7007
3) 1) (integer) 4096
   2) (integer) 8191
   3) 1) "127.0.0.1"
      2) (integer) 7001
   4) 1) "127.0.0.1"
      2) (integer) 7005
4) 1) (integer) 8192
   2) (integer) 12287
   3) 1) "127.0.0.1"
      2) (integer) 7002
   4) 1) "127.0.0.1"
      2) (integer) 7006

Return value

Array reply: nested list of slot ranges with IP/Port mappings.

>COMMAND

Get array of Redis command details

Available since 2.8.13.

Time complexity: O(N) where N is the total number of Redis commands

Returns Array reply of details about all Redis commands.

Cluster clients must be aware of key positions in commands so commands can go to matching instances, but Redis commands vary between accepting one key, multiple keys, or even multiple keys separated by other data.

You can use COMMAND to cache a mapping between commands and key positions for each command to enable exact routing of commands to cluster instances.

Nested Result Array

Each top-level result contains six nested results. Each nested result is:

  • command name
  • command arity specification
  • nested Array reply of command flags
  • position of first key in argument list
  • position of last key in argument list
  • step count for locating repeating keys

Command Name

Command name is the command returned as a lowercase string.

Command Arity

Command arity follows a simple pattern:

  • positive if command has fixed number of required arguments.
  • negative if command has minimum number of required arguments, but may have more.

Command arity includes counting the command name itself.

Examples:

  • GET arity is 2 since the command only accepts one argument and always has the format GET key.
  • MGET arity is -2 since the command accepts at a minimum one argument, but up to an unlimited number: MGET key1 [key2] [key3] ….

Also note with MGET, the -1 value for “last key position” means the list of keys may have unlimited length.

Flags

Command flags is Array reply containing one or more status replies:

  • write - command may result in modifications
  • readonly - command will never modify keys
  • denyoom - reject command if currently OOM
  • admin - server admin command
  • pubsub - pubsub-related command
  • noscript - deny this command from scripts
  • random - command has random results, dangerous for scripts
  • sort_for_script - if called from script, sort output
  • loading - allow command while database is loading
  • stale - allow command while replica has stale data
  • skip_monitor - do not show this command in MONITOR
  • asking - cluster related - accept even if importing
  • fast - command operates in constant or log(N) time. Used for latency monitoring.
  • movablekeys - keys have no pre-determined position. You must discover keys yourself.

Movable Keys

1) 1) "sort"
   2) (integer) -2
   3) 1) write
      2) denyoom
      3) movablekeys
   4) (integer) 1
   5) (integer) 1
   6) (integer) 1

Some Redis commands have no predetermined key locations. For those commands, the flag movablekeys is added to the command flags Array reply. Your Redis Cluster client needs to parse commands marked movablekeys to locate all relevant key positions.

Complete list of commands currently requiring key location parsing:

  • SORT - optional STORE key, optional BY weights, optional GET keys
  • ZUNIONSTORE - keys stop when WEIGHTS or AGGREGATE starts
  • ZINTERSTORE - keys stop when WEIGHTS or AGGREGATE starts
  • EVAL - keys stop after numkeys count arguments
  • EVALSHA - keys stop after numkeys count arguments

Also see COMMAND GETKEYS for having your Redis server tell you where keys are in any given command.

First Key in Argument List

For most commands the first key is position 1. Position 0 is always the command name itself.

Last Key in Argument List

Redis commands usually accept one key, two keys, or an unlimited number of keys.

If a command accepts one key, the first key and last key positions are both 1.

If a command accepts two keys (e.g. BRPOPLPUSH, SMOVE, RENAME, …) then the last key position is the location of the last key in the argument list.

If a command accepts an unlimited number of keys, the last key position is -1.

Step Count

Key step count allows us to find key positions in commands like MSET where the format is MSET key1 val1 [key2] [val2] [key3] [val3]….

In the case of MSET, keys are every other position so the step value is 2. Compare with MGET above where the step value is just 1.
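
A small sketch of how a client can turn the first/last/step triple into actual key positions; the specs below are the ones discussed above, hard-coded for illustration:

# (first key position, last key position, step count) as returned by COMMAND
KEY_SPECS = {
    "get":  (1, 1, 1),
    "mget": (1, -1, 1),   # every argument after the command name is a key
    "mset": (1, -1, 2),   # keys at every other position: key1 val1 key2 val2 ...
}

def extract_keys(args):
    first, last, step = KEY_SPECS[args[0].lower()]
    if first == 0:
        return []                             # the command takes no keys
    if last < 0:
        last = len(args) - 1                  # -1 means "up to the last argument"
    return [args[i] for i in range(first, last + 1, step)]

print(extract_keys(["MSET", "k1", "v1", "k2", "v2"]))  # ['k1', 'k2']
print(extract_keys(["MGET", "k1", "k2", "k3"]))        # ['k1', 'k2', 'k3']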

Return value

Array reply: nested list of command details. Commands are returned in random order.

Examples

redis>  COMMAND

1) 1) "pfcount"
     2) (integer) -2
     3) 1) write
     4) (integer) 1
     5) (integer) 1
     6) (integer) 1
  2) 1) "command"
     2) (integer) 0
     3) 1) readonly
        2) loading
        3) stale
     4) (integer) 0
     5) (integer) 0
     6) (integer) 0
  3) 1) "zscan"
     2) (integer) -3
     3) 1) readonly
        2) random
     4) (integer) 1
     5) (integer) 1
     6) (integer) 1
  4) 1) "echo"
     2) (integer) 2
     3) 1) readonly
        2) fast
     4) (integer) 0
     5) (integer) 0
     6) (integer) 0
  5) 1) "select"
     2) (integer) 2
     3) 1) readonly
        2) loading
        3) fast
     4) (integer) 0
     5) (integer) 0
     6) (integer) 0
  6) 1) "zcount"
     2) (integer) 4
     3) 1) readonly
        2) fast
     4) (integer) 1
     5) (integer) 1
     6) (integer) 1
  7) 1) "substr"
     2) (integer) 4
     3) 1) readonly
     4) (integer) 1
     5) (integer) 1
     6) (integer) 1
  8) 1) "pttl"
     2) (integer) 2
     3) 1) readonly
        2) fast
     4) (integer) 1
     5) (integer) 1
     6) (integer) 1
  9) 1) "hincrbyfloat"
     2) (integer) 4
     3) 1) write
        2) denyoom
        3) fast
     4) (integer) 1
     5) (integer) 1
     6) (integer) 1
 10) 1) "hlen"
     2) (integer) 2
     3) 1) readonly
        2) fast
     4) (integer) 1
     5) (integer) 1
     6) (integer) 1
 11) 1) "incrby"
     2) (integer) 3
     3) 1) write
        2) denyoom
        3) fast
     4) (integer) 1
     5) (integer) 1
     6) (integer) 1
 12) 1) "setex"
     2) (integer) 4
     3) 1) write
        2) denyoom
     4) (integer) 1
     5) (integer) 1
     6) (integer) 1
 13) 1) "persist"
     2) (integer) 2
     3) 1) write
        2) fast
     4) (integer) 1
     5) (integer) 1
     6) (integer) 1
 14) 1) "setbit"
     2) (integer) 4
     3) 1) write
        2) denyoom
     4) (integer) 1
     5) (integer) 1
     6) (integer) 1
 15) 1) "info"
     2) (integer) -1
     3) 1) readonly
        2) loading
        3) stale
     4) (integer) 0
     5) (integer) 0
     6) (integer) 0
 16) 1) "scard"
     2) (integer) 2
     3) 1) readonly
        2) fast
     4) (integer) 1
     5) (integer) 1
     6) (integer) 1
 17) 1) "srandmember"
     2) (integer) -2
     3) 1) readonly
        2) random
     4) (integer) 1
     5) (integer) 1
     6) (integer) 1
 18) 1) "lrem"
     2) (integer) 4
     3) 1) write
     4) (integer) 1
     5) (integer) 1
     6) (integer) 1
 19) 1) "append"
     2) (integer) 3
     3) 1) write
        2) denyoom
     4) (integer) 1
     5) (integer) 1
     6) (integer) 1
 20) 1) "hgetall"
     2) (integer) 2
     3) 1) readonly
     4) (integer) 1
     5) (integer) 1
     6) (integer) 1
 21) 1) "zincrby"
     2) (integer) 4
     3) 1) write
        2) denyoom
        3) fast
     4) (integer) 1
     5) (integer) 1
     6) (integer) 1
 22) 1) "rpop"
     2) (integer) 2
     3) 1) write
        2) fast
     4) (integer) 1
     5) (integer) 1
     6) (integer) 1
 23) 1) "cluster"
     2) (integer) -2
     3) 1) readonly
        2) admin
     4) (integer) 0
     5) (integer) 0
     6) (integer) 0
 24) 1) "ltrim"
     2) (integer) 4
     3) 1) write
     4) (integer) 1
     5) (integer) 1
     6) (integer) 1
 25) 1) "flushdb"
     2) (integer) 1
     3) 1) write
     4) (integer) 0
     5) (integer) 0
     6) (integer) 0
 26) 1) "rpoplpush"
     2) (integer) 3
     3) 1) write
        2) denyoom
     4) (integer) 1
     5) (integer) 2
     6) (integer) 1
 27) 1) "expire"
     2) (integer) 3
     3) 1) write
        2) fast
     4) (integer) 1
     5) (integer) 1
     6) (integer) 1
 28) 1) "psync"
     2) (integer) 3
     3) 1) readonly
        2) admin
        3) noscript
     4) (integer) 0
     5) (integer) 0
     6) (integer) 0
 29) 1) "zremrangebylex"
     2) (integer) 4
     3) 1) write
     4) (integer) 1
     5) (integer) 1
     6) (integer) 1
 30) 1) "pubsub"
     2) (integer) -2
     3) 1) readonly
        2) pubsub
        3) random
        4) loading
        5) stale
     4) (integer) 0
     5) (integer) 0
     6) (integer) 0
 31) 1) "setnx"
     2) (integer) 3
     3) 1) write
        2) denyoom
        3) fast
     4) (integer) 1
     5) (integer) 1
     6) (integer) 1
 32) 1) "pexpireat"
     2) (integer) 3
     3) 1) write
        2) fast
     4) (integer) 1
     5) (integer) 1
     6) (integer) 1
 33) 1) "psubscribe"
     2) (integer) -2
     3) 1) readonly
        2) pubsub
        3) noscript
        4) loading
        5) stale
     4) (integer) 0
     5) (integer) 0
     6) (integer) 0
 34) 1) "zrevrange"
     2) (integer) -4
     3) 1) readonly
     4) (integer) 1
     5) (integer) 1
     6) (integer) 1
 35) 1) "hmget"
     2) (integer) -3
     3) 1) readonly
     4) (integer) 1
     5) (integer) 1
     6) (integer) 1
 36) 1) "object"
     2) (integer) -2
     3) 1) readonly
     4) (integer) 2
     5) (integer) 2
     6) (integer) 2
 37) 1) "watch"
     2) (integer) -2
     3) 1) readonly
        2) noscript
        3) fast
     4) (integer) 1
     5) (integer) -1
     6) (integer) 1
 38) 1) "setrange"
     2) (integer) 4
     3) 1) write
        2) denyoom
     4) (integer) 1
     5) (integer) 1
     6) (integer) 1
 39) 1) "sdiffstore"
     2) (integer) -3
     3) 1) write
        2) denyoom
     4) (integer) 1
     5) (integer) -1
     6) (integer) 1
 40) 1) "flushall"
     2) (integer) 1
     3) 1) write
     4) (integer) 0
     5) (integer) 0
     6) (integer) 0
 41) 1) "sadd"
     2) (integer) -3
     3) 1) write
        2) denyoom
        3) fast
     4) (integer) 1
     5) (integer) 1
     6) (integer) 1
 42) 1) "renamenx"
     2) (integer) 3
     3) 1) write
        2) fast
     4) (integer) 1
     5) (integer) 2
     6) (integer) 1
 43) 1) "zrangebyscore"
     2) (integer) -4
     3) 1) readonly
     4) (integer) 1
     5) (integer) 1
     6) (integer) 1
 44) 1) "bitop"
     2) (integer) -4
     3) 1) write
        2) denyoom
     4) (integer) 2
     5) (integer) -1
     6) (integer) 1
 45) 1) "get"
     2) (integer) 2
     3) 1) readonly
        2) fast
     4) (integer) 1
     5) (integer) 1
     6) (integer) 1
 46) 1) "hmset"
     2) (integer) -4
     3) 1) write
        2) denyoom
     4) (integer) 1
     5) (integer) 1
     6) (integer) 1
 47) 1) "type"
     2) (integer) 2
     3) 1) readonly
        2) fast
     4) (integer) 1
     5) (integer) 1
     6) (integer) 1
 48) 1) "evalsha"
     2) (integer) -3
     3) 1) noscript
        2) movablekeys
     4) (integer) 0
     5) (integer) 0
     6) (integer) 0
 49) 1) "zrevrangebyscore"
     2) (integer) -4
     3) 1) readonly
     4) (integer) 1
     5) (integer) 1
     6) (integer) 1
 50) 1) "set"
     2) (integer) -3
     3) 1) write
        2) denyoom
     4) (integer) 1
     5) (integer) 1
     6) (integer) 1
 51) 1) "getset"
     2) (integer) 3
     3) 1) write
        2) denyoom
     4) (integer) 1
     5) (integer) 1
     6) (integer) 1
 52) 1) "punsubscribe"
     2) (integer) -1
     3) 1) readonly
        2) pubsub
        3) noscript
        4) loading
        5) stale
     4) (integer) 0
     5) (integer) 0
     6) (integer) 0
 53) 1) "publish"
     2) (integer) 3
     3) 1) readonly
        2) pubsub
        3) loading
        4) stale
        5) fast
     4) (integer) 0
     5) (integer) 0
     6) (integer) 0
 54) 1) "lset"
     2) (integer) 4
     3) 1) write
        2) denyoom
     4) (integer) 1
     5) (integer) 1
     6) (integer) 1
 55) 1) "rename"
     2) (integer) 3
     3) 1) write
     4) (integer) 1
     5) (integer) 2
     6) (integer) 1
 56) 1) "bgsave"
     2) (integer) 1
     3) 1) readonly
        2) admin
     4) (integer) 0
     5) (integer) 0
     6) (integer) 0
 57) 1) "decrby"
     2) (integer) 3
     3) 1) write
        2) denyoom
        3) fast
     4) (integer) 1
     5) (integer) 1
     6) (integer) 1
 58) 1) "sunion"
     2) (integer) -2
     3) 1) readonly
        2) sort_for_script
     4) (integer) 1
     5) (integer) -1
     6) (integer) 1
 59) 1) "blpop"
     2) (integer) -3
     3) 1) write
        2) noscript
     4) (integer) 1
     5) (integer) -2
     6) (integer) 1
 60) 1) "zrem"
     2) (integer) -3
     3) 1) write
        2) fast
     4) (integer) 1
     5) (integer) 1
     6) (integer) 1
 61) 1) "readonly"
     2) (integer) 1
     3) 1) readonly
        2) fast
     4) (integer) 0
     5) (integer) 0
     6) (integer) 0
 62) 1) "exists"
     2) (integer) 2
     3) 1) readonly
        2) fast
     4) (integer) 1
     5) (integer) 1
     6) (integer) 1
 63) 1) "linsert"
     2) (integer) 5
     3) 1) write
        2) denyoom
     4) (integer) 1
     5) (integer) 1
     6) (integer) 1
 64) 1) "lindex"
     2) (integer) 3
     3) 1) readonly
     4) (integer) 1
     5) (integer) 1
     6) (integer) 1
 65) 1) "scan"
     2) (integer) -2
     3) 1) readonly
        2) random
     4) (integer) 0
     5) (integer) 0
     6) (integer) 0
 66) 1) "migrate"
     2) (integer) -6
     3) 1) write
        2) admin
     4) (integer) 0
     5) (integer) 0
     6) (integer) 0
 67) 1) "ping"
     2) (integer) 1
     3) 1) readonly
        2) stale
        3) fast
     4) (integer) 0
     5) (integer) 0
     6) (integer) 0
 68) 1) "zunionstore"
     2) (integer) -4
     3) 1) write
        2) denyoom
        3) movablekeys
     4) (integer) 0
     5) (integer) 0
     6) (integer) 0
 69) 1) "latency"
     2) (integer) -2
     3) 1) readonly
        2) admin
        3) noscript
        4) loading
        5) stale
     4) (integer) 0
     5) (integer) 0
     6) (integer) 0
 70) 1) "role"
     2) (integer) 1
     3) 1) admin
        2) noscript
        3) loading
        4) stale
     4) (integer) 0
     5) (integer) 0
     6) (integer) 0
 71) 1) "ttl"
     2) (integer) 2
     3) 1) readonly
        2) fast
     4) (integer) 1
     5) (integer) 1
     6) (integer) 1
 72) 1) "del"
     2) (integer) -2
     3) 1) write
     4) (integer) 1
     5) (integer) -1
     6) (integer) 1
 73) 1) "wait"
     2) (integer) 3
     3) 1) readonly
        2) noscript
     4) (integer) 0
     5) (integer) 0
     6) (integer) 0
 74) 1) "zscore"
     2) (integer) 3
     3) 1) readonly
        2) fast
     4) (integer) 1
     5) (integer) 1
     6) (integer) 1
 75) 1) "zrevrangebylex"
     2) (integer) -4
     3) 1) readonly
     4) (integer) 1
     5) (integer) 1
     6) (integer) 1
 76) 1) "sscan"
     2) (integer) -3
     3) 1) readonly
        2) random
     4) (integer) 1
     5) (integer) 1
     6) (integer) 1
 77) 1) "incrbyfloat"
     2) (integer) 3
     3) 1) write
        2) denyoom
        3) fast
     4) (integer) 1
     5) (integer) 1
     6) (integer) 1
 78) 1) "decr"
     2) (integer) 2
     3) 1) write
        2) denyoom
        3) fast
     4) (integer) 1
     5) (integer) 1
     6) (integer) 1
 79) 1) "getbit"
     2) (integer) 3
     3) 1) readonly
        2) fast
     4) (integer) 1
     5) (integer) 1
     6) (integer) 1
 80) 1) "spop"
     2) (integer) 2
     3) 1) write
        2) noscript
        3) random
        4) fast
     4) (integer) 1
     5) (integer) 1
     6) (integer) 1
 81) 1) "hkeys"
     2) (integer) 2
     3) 1) readonly
        2) sort_for_script
     4) (integer) 1
     5) (integer) 1
     6) (integer) 1
 82) 1) "pfmerge"
     2) (integer) -2
     3) 1) write
        2) denyoom
     4) (integer) 1
     5) (integer) -1
     6) (integer) 1
 83) 1) "zrange"
     2) (integer) -4
     3) 1) readonly
     4) (integer) 1
     5) (integer) 1
     6) (integer) 1
 84) 1) "monitor"
     2) (integer) 1
     3) 1) readonly
        2) admin
        3) noscript
     4) (integer) 0
     5) (integer) 0
     6) (integer) 0
 85) 1) "zinterstore"
     2) (integer) -4
     3) 1) write
        2) denyoom
        3) movablekeys
     4) (integer) 0
     5) (integer) 0
     6) (integer) 0
 86) 1) "rpushx"
     2) (integer) 3
     3) 1) write
        2) denyoom
        3) fast
     4) (integer) 1
     5) (integer) 1
     6) (integer) 1
 87) 1) "llen"
     2) (integer) 2
     3) 1) readonly
        2) fast
     4) (integer) 1
     5) (integer) 1
     6) (integer) 1
 88) 1) "hincrby"
     2) (integer) 4
     3) 1) write
        2) denyoom
        3) fast
     4) (integer) 1
     5) (integer) 1
     6) (integer) 1
 89) 1) "save"
     2) (integer) 1
     3) 1) readonly
        2) admin
        3) noscript
     4) (integer) 0
     5) (integer) 0
     6) (integer) 0
 90) 1) "zremrangebyrank"
     2) (integer) 4
     3) 1) write
     4) (integer) 1
     5) (integer) 1
     6) (integer) 1
 91) 1) "auth"
     2) (integer) 2
     3) 1) readonly
        2) noscript
        3) loading
        4) stale
        5) fast
     4) (integer) 0
     5) (integer) 0
     6) (integer) 0
 92) 1) "zcard"
     2) (integer) 2
     3) 1) readonly
        2) fast
     4) (integer) 1
     5) (integer) 1
     6) (integer) 1
 93) 1) "psetex"
     2) (integer) 4
     3) 1) write
        2) denyoom
     4) (integer) 1
     5) (integer) 1
     6) (integer) 1
 94) 1) "shutdown"
     2) (integer) -1
     3) 1) readonly
        2) admin
        3) loading
        4) stale
     4) (integer) 0
     5) (integer) 0
     6) (integer) 0
 95) 1) "sync"
     2) (integer) 1
     3) 1) readonly
        2) admin
        3) noscript
     4) (integer) 0
     5) (integer) 0
     6) (integer) 0
 96) 1) "dbsize"
     2) (integer) 1
     3) 1) readonly
        2) fast
     4) (integer) 0
     5) (integer) 0
     6) (integer) 0
 97) 1) "expireat"
     2) (integer) 3
     3) 1) write
        2) fast
     4) (integer) 1
     5) (integer) 1
     6) (integer) 1
 98) 1) "subscribe"
     2) (integer) -2
     3) 1) readonly
        2) pubsub
        3) noscript
        4) loading
        5) stale
     4) (integer) 0
     5) (integer) 0
     6) (integer) 0
 99) 1) "brpop"
     2) (integer) -3
     3) 1) write
        2) noscript
     4) (integer) 1
     5) (integer) 1
     6) (integer) 1
100) 1) "sort"
     2) (integer) -2
     3) 1) write
        2) denyoom
        3) movablekeys
     4) (integer) 1
     5) (integer) 1
     6) (integer) 1
101) 1) "sunionstore"
     2) (integer) -3
     3) 1) write
        2) denyoom
     4) (integer) 1
     5) (integer) -1
     6) (integer) 1
102) 1) "zrangebylex"
     2) (integer) -4
     3) 1) readonly
     4) (integer) 1
     5) (integer) 1
     6) (integer) 1
103) 1) "zlexcount"
     2) (integer) 4
     3) 1) readonly
        2) fast
     4) (integer) 1
     5) (integer) 1
     6) (integer) 1
104) 1) "lpush"
     2) (integer) -3
     3) 1) write
        2) denyoom
        3) fast
     4) (integer) 1
     5) (integer) 1
     6) (integer) 1
105) 1) "incr"
     2) (integer) 2
     3) 1) write
        2) denyoom
        3) fast
     4) (integer) 1
     5) (integer) 1
     6) (integer) 1
106) 1) "mget"
     2) (integer) -2
     3) 1) readonly
     4) (integer) 1
     5) (integer) -1
     6) (integer) 1
107) 1) "getrange"
     2) (integer) 4
     3) 1) readonly
     4) (integer) 1
     5) (integer) 1
     6) (integer) 1
108) 1) "slaveof"
     2) (integer) 3
     3) 1) admin
        2) noscript
        3) stale
     4) (integer) 0
     5) (integer) 0
     6) (integer) 0
109) 1) "bitpos"
     2) (integer) -3
     3) 1) readonly
     4) (integer) 1
     5) (integer) 1
     6) (integer) 1
110) 1) "rpush"
     2) (integer) -3
     3) 1) write
        2) denyoom
        3) fast
     4) (integer) 1
     5) (integer) 1
     6) (integer) 1
111) 1) "config"
     2) (integer) -2
     3) 1) readonly
        2) admin
        3) stale
     4) (integer) 0
     5) (integer) 0
     6) (integer) 0
112) 1) "srem"
     2) (integer) -3
     3) 1) write
        2) fast
     4) (integer) 1
     5) (integer) 1
     6) (integer) 1
113) 1) "mset"
     2) (integer) -3
     3) 1) write
        2) denyoom
     4) (integer) 1
     5) (integer) -1
     6) (integer) 2
114) 1) "lrange"
     2) (integer) 4
     3) 1) readonly
     4) (integer) 1
     5) (integer) 1
     6) (integer) 1
115) 1) "replconf"
     2) (integer) -1
     3) 1) readonly
        2) admin
        3) noscript
        4) loading
        5) stale
     4) (integer) 0
     5) (integer) 0
     6) (integer) 0
116) 1) "hsetnx"
     2) (integer) 4
     3) 1) write
        2) denyoom
        3) fast
     4) (integer) 1
     5) (integer) 1
     6) (integer) 1
117) 1) "discard"
     2) (integer) 1
     3) 1) readonly
        2) noscript
        3) fast
     4) (integer) 0
     5) (integer) 0
     6) (integer) 0
118) 1) "pexpire"
     2) (integer) 3
     3) 1) write
        2) fast
     4) (integer) 1
     5) (integer) 1
     6) (integer) 1
119) 1) "pfdebug"
     2) (integer) -3
     3) 1) write
     4) (integer) 0
     5) (integer) 0
     6) (integer) 0
120) 1) "asking"
     2) (integer) 1
     3) 1) readonly
     4) (integer) 0
     5) (integer) 0
     6) (integer) 0
121) 1) "client"
     2) (integer) -2
     3) 1) readonly
        2) admin
     4) (integer) 0
     5) (integer) 0
     6) (integer) 0
122) 1) "pfselftest"
     2) (integer) 1
     3) 1) readonly
     4) (integer) 0
     5) (integer) 0
     6) (integer) 0
123) 1) "bgrewriteaof"
     2) (integer) 1
     3) 1) readonly
        2) admin
     4) (integer) 0
     5) (integer) 0
     6) (integer) 0
124) 1) "zremrangebyscore"
     2) (integer) 4
     3) 1) write
     4) (integer) 1
     5) (integer) 1
     6) (integer) 1
125) 1) "sinterstore"
     2) (integer) -3
     3) 1) write
        2) denyoom
     4) (integer) 1
     5) (integer) -1
     6) (integer) 1
126) 1) "lpushx"
     2) (integer) 3
     3) 1) write
        2) denyoom
        3) fast
     4) (integer) 1
     5) (integer) 1
     6) (integer) 1
127) 1) "restore"
     2) (integer) -4
     3) 1) write
        2) denyoom
        3) admin
     4) (integer) 1
     5) (integer) 1
     6) (integer) 1
128) 1) "unsubscribe"
     2) (integer) -1
     3) 1) readonly
        2) pubsub
        3) noscript
        4) loading
        5) stale
     4) (integer) 0
     5) (integer) 0
     6) (integer) 0
129) 1) "zrank"
     2) (integer) 3
     3) 1) readonly
        2) fast
     4) (integer) 1
     5) (integer) 1
     6) (integer) 1
130) 1) "readwrite"
     2) (integer) 1
     3) 1) readonly
        2) fast
     4) (integer) 0
     5) (integer) 0
     6) (integer) 0
131) 1) "hget"
     2) (integer) 3
     3) 1) readonly
        2) fast
     4) (integer) 1
     5) (integer) 1
     6) (integer) 1
132) 1) "bitcount"
     2) (integer) -2
     3) 1) readonly
     4) (integer) 1
     5) (integer) 1
     6) (integer) 1
133) 1) "randomkey"
     2) (integer) 1
     3) 1) readonly
        2) random
     4) (integer) 0
     5) (integer) 0
     6) (integer) 0
134) 1) "restore-asking"
     2) (integer) -4
     3) 1) write
        2) denyoom
        3) admin
        4) asking
     4) (integer) 1
     5) (integer) 1
     6) (integer) 1
135) 1) "time"
     2) (integer) 1
     3) 1) readonly
        2) random
        3) fast
     4) (integer) 0
     5) (integer) 0
     6) (integer) 0
136) 1) "zrevrank"
     2) (integer) 3
     3) 1) readonly
        2) fast
     4) (integer) 1
     5) (integer) 1
     6) (integer) 1
137) 1) "hset"
     2) (integer) 4
     3) 1) write
        2) denyoom
        3) fast
     4) (integer) 1
     5) (integer) 1
     6) (integer) 1
138) 1) "sinter"
     2) (integer) -2
     3) 1) readonly
        2) sort_for_script
     4) (integer) 1
     5) (integer) -1
     6) (integer) 1
139) 1) "dump"
     2) (integer) 2
     3) 1) readonly
        2) admin
     4) (integer) 1
     5) (integer) 1
     6) (integer) 1
140) 1) "move"
     2) (integer) 3
     3) 1) write
        2) fast
     4) (integer) 1
     5) (integer) 1
     6) (integer) 1
141) 1) "strlen"
     2) (integer) 2
     3) 1) readonly
        2) fast
     4) (integer) 1
     5) (integer) 1
     6) (integer) 1
142) 1) "unwatch"
     2) (integer) 1
     3) 1) readonly
        2) noscript
        3) fast
     4) (integer) 0
     5) (integer) 0
     6) (integer) 0
143) 1) "lpop"
     2) (integer) 2
     3) 1) write
        2) fast
     4) (integer) 1
     5) (integer) 1
     6) (integer) 1
144) 1) "smembers"
     2) (integer) 2
     3) 1) readonly
        2) sort_for_script
     4) (integer) 1
     5) (integer) 1
     6) (integer) 1
145) 1) "msetnx"
     2) (integer) -3
     3) 1) write
        2) denyoom
     4) (integer) 1
     5) (integer) -1
     6) (integer) 2
146) 1) "pfadd"
     2) (integer) -2
     3) 1) write
        2) denyoom
        3) fast
     4) (integer) 1
     5) (integer) 1
     6) (integer) 1
147) 1) "zadd"
     2) (integer) -4
     3) 1) write
        2) denyoom
        3) fast
     4) (integer) 1
     5) (integer) 1
     6) (integer) 1
148) 1) "lastsave"
     2) (integer) 1
     3) 1) readonly
        2) random
        3) fast
     4) (integer) 0
     5) (integer) 0
     6) (integer) 0
149) 1) "exec"
     2) (integer) 1
     3) 1) noscript
        2) skip_monitor
     4) (integer) 0
     5) (integer) 0
     6) (integer) 0
150) 1) "sismember"
     2) (integer) 3
     3) 1) readonly
        2) fast
     4) (integer) 1
     5) (integer) 1
     6) (integer) 1
151) 1) "debug"
     2) (integer) -2
     3) 1) admin
        2) noscript
     4) (integer) 0
     5) (integer) 0
     6) (integer) 0
152) 1) "slowlog"
     2) (integer) -2
     3) 1) readonly
     4) (integer) 0
     5) (integer) 0
     6) (integer) 0
153) 1) "hexists"
     2) (integer) 3
     3) 1) readonly
        2) fast
     4) (integer) 1
     5) (integer) 1
     6) (integer) 1
154) 1) "eval"
     2) (integer) -3
     3) 1) noscript
        2) movablekeys
     4) (integer) 0
     5) (integer) 0
     6) (integer) 0
155) 1) "smove"
     2) (integer) 4
     3) 1) write
        2) fast
     4) (integer) 1
     5) (integer) 2
     6) (integer) 1
156) 1) "multi"
     2) (integer) 1
     3) 1) readonly
        2) noscript
        3) fast
     4) (integer) 0
     5) (integer) 0
     6) (integer) 0
157) 1) "sdiff"
     2) (integer) -2
     3) 1) readonly
        2) sort_for_script
     4) (integer) 1
     5) (integer) -1
     6) (integer) 1
158) 1) "hscan"
     2) (integer) -3
     3) 1) readonly
        2) random
     4) (integer) 1
     5) (integer) 1
     6) (integer) 1
159) 1) "brpoplpush"
     2) (integer) 4
     3) 1) write
        2) denyoom
        3) noscript
     4) (integer) 1
     5) (integer) 2
     6) (integer) 1
160) 1) "script"
     2) (integer) -2
     3) 1) readonly
        2) admin
        3) noscript
     4) (integer) 0
     5) (integer) 0
     6) (integer) 0
161) 1) "keys"
     2) (integer) 2
     3) 1) readonly
        2) sort_for_script
     4) (integer) 0
     5) (integer) 0
     6) (integer) 0
162) 1) "hdel"
     2) (integer) -3
     3) 1) write
        2) fast
     4) (integer) 1
     5) (integer) 1
     6) (integer) 1
163) 1) "hvals"
     2) (integer) 2
     3) 1) readonly
        2) sort_for_script
     4) (integer) 1
     5) (integer) 1
     6) (integer) 1

>COMMAND COUNT

Get total number of Redis commands

Available since 2.8.13.

Time complexity: O(1)

Returns an Integer reply with the total number of commands in this Redis server.

Return value

Integer reply: number of commands returned by COMMAND

Examples

redis>  COMMAND COUNT

(integer) 163
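
The same value can be fetched from client code. A minimal sketch, assuming the redis-py client (not part of this document) and its generic execute_command passthrough:

import redis

r = redis.Redis(host='localhost', port=6379)
# COMMAND COUNT returns a single Integer reply
total = r.execute_command('COMMAND COUNT')
print(total)   # e.g. 163, as in the redis-cli example above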

>COMMAND GETKEYS

Extract keys given a full Redis command

Available since 2.8.13.

Time complexity: O(N) where N is the number of arguments to the command

Returns Array reply of keys from a full Redis command.

COMMAND GETKEYS is a helper command to let you find the keys from a full Redis command.

COMMAND shows some commands as having movablekeys meaning the entire command must be parsed to discover storage or retrieval keys. You can use COMMAND GETKEYS to discover key positions directly from how Redis parses the commands.

Return value

Array reply: list of keys from your command.

Examples

redis>  COMMAND GETKEYS MSET a b c d e f

1) "a"
2) "c"
3) "e"

redis>  COMMAND GETKEYS EVAL "not consulted" 3 key1 key2 key3 arg1 arg2 arg3 argN

1) "key1"
2) "key2"
3) "key3"

redis>  COMMAND GETKEYS SORT mylist ALPHA STORE outlist

1) "mylist"

>COMMAND INFO command-name [command-name …]

Get array of specific Redis command details

Available since 2.8.13.

Time complexity: O(N) where N is the number of commands to look up

Returns Array reply of details about multiple Redis commands.

Same result format as COMMAND except you can specify which commands get returned.

If you request details about non-existing commands, their return position will be nil.

Return value

Array reply: nested list of command details.

Examples

redis>  COMMAND INFO get set eval

1) 1) "get"
   2) (integer) 2
   3) 1) readonly
      2) fast
   4) (integer) 1
   5) (integer) 1
   6) (integer) 1
2) 1) "set"
   2) (integer) -3
   3) 1) write
      2) denyoom
   4) (integer) 1
   5) (integer) 1
   6) (integer) 1
3) 1) "eval"
   2) (integer) -3
   3) 1) noscript
      2) movablekeys
   4) (integer) 0
   5) (integer) 0
   6) (integer) 0

redis>  COMMAND INFO foo evalsha config bar

1) (nil)
2) 1) "evalsha"
   2) (integer) -3
   3) 1) noscript
      2) movablekeys
   4) (integer) 0
   5) (integer) 0
   6) (integer) 0
3) 1) "config"
   2) (integer) -2
   3) 1) readonly
      2) admin
      3) stale
   4) (integer) 0
   5) (integer) 0
   6) (integer) 0
4) (nil)

>CONFIG GET parameter

Get the value of a configuration parameter

Available since 2.0.0.

The CONFIG GET command is used to read the configuration parameters of a running Redis server. Not all the configuration parameters are supported in Redis 2.4, while Redis 2.6 can read the whole configuration of a server using this command.

The symmetric command used to alter the configuration at run time is CONFIG SET.

CONFIG GET takes a single argument, which is a glob-style pattern. All the configuration parameters matching this pattern are reported as a list of key-value pairs. Example:

redis> config get *max-*-entries*
1) "hash-max-zipmap-entries"
2) "512"
3) "list-max-ziplist-entries"
4) "512"
5) "set-max-intset-entries"
6) "512"

You can obtain a list of all the supported configuration parameters by typing CONFIG GET * in an open redis-cli prompt.

All the supported parameters have the same meaning of the equivalent configuration parameter used in the redis.conf file, with the following important differences:

  • Where bytes or other quantities are specified, it is not possible to use the redis.conf abbreviated form (10k 2gb … and so forth), everything should be specified as a well-formed 64-bit integer, in the base unit of the configuration directive.
  • The save parameter is a single string of space-separated integers. Every pair of integers represents a seconds/modifications threshold.

For instance what in redis.conf looks like:

save 900 1
save 300 10

that means: save after 900 seconds if there is at least 1 change to the dataset, and after 300 seconds if there are at least 10 changes to the dataset. This will be reported by CONFIG GET as “900 1 300 10”.

Return value

The return type of the command is an Array reply.
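
For reference, the same lookup from client code, as a minimal sketch with the redis-py client (not part of this document); its config_get helper accepts the same glob-style pattern and returns the matching parameters as a dictionary:

import redis

r = redis.Redis(decode_responses=True)
# Fetch every parameter matching the glob-style pattern
params = r.config_get('*max-*-entries*')
for name, value in params.items():
    print(name, '=', value)
# A single parameter works the same way, e.g. r.config_get('maxmemory')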

>CONFIG REWRITE

Rewrite the configuration file with the in memory configuration

Available since 2.8.0.

The CONFIG REWRITE command rewrites the redis.conf file the server was started with, applying the minimal changes needed to make it reflect the configuration currently used by the server, which may differ from the original one because of the use of the CONFIG SET command.

The rewrite is performed in a very conservative way:

  • Comments and the overall structure of the original redis.conf are preserved as much as possible.
  • If an option already exists in the old redis.conf file, it will be rewritten at the same position (line number).
  • If an option was not already present, but it is set to its default value, it is not added by the rewrite process.
  • If an option was not already present, but it is set to a non-default value, it is appended at the end of the file.
  • Unused lines are blanked. For instance, if you used to have multiple save directives, but the current configuration has fewer or none because you disabled RDB persistence, all those lines will be blanked.

CONFIG REWRITE is also able to rewrite the configuration file from scratch if the original one no longer exists for some reason. However if the server was started without a configuration file at all, the CONFIG REWRITE will just return an error.

Atomic rewrite process

In order to make sure the redis.conf file is always consistent, that is, on errors or crashes you always end up with either the old file or the new one, the rewrite is performed with a single write(2) call that has enough content to be at least as big as the old file. Sometimes additional padding in the form of comments is added in order to make sure the resulting file is big enough, and later the file gets truncated to remove the padding at the end.

Return value

Simple string reply: OK when the configuration was rewritten properly. Otherwise an error is returned.

>CONFIG SET parameter value

Set a configuration parameter to the given value

Available since 2.0.0.

The CONFIG SET command is used in order to reconfigure the server at run time without the need to restart Redis. You can change both trivial parameters and switch from one persistence option to another using this command.

The list of configuration parameters supported by CONFIG SET can be obtained issuing a CONFIG GET * command, that is the symmetrical command used to obtain information about the configuration of a running Redis instance.

All the configuration parameters set using CONFIG SET are immediately loaded by Redis and will take effect starting with the next command executed.

All the supported parameters have the same meaning of the equivalent configuration parameter used in the redis.conf file, with the following important differences:

  • Where bytes or other quantities are specified, it is not possible to use the redis.conf abbreviated form (10k 2gb … and so forth), everything should be specified as a well-formed 64-bit integer, in the base unit of the configuration directive.
  • The save parameter is a single string of space-separated integers. Every pair of integers represents a seconds/modifications threshold.

For instance what in redis.conf looks like:

save 900 1
save 300 10

that means: save after 900 seconds if there is at least 1 change to the dataset, and after 300 seconds if there are at least 10 changes to the dataset. This should be set using CONFIG SET as “900 1 300 10”.

It is possible to switch persistence from RDB snapshotting to append-only file (and the other way around) using the CONFIG SET command. For more information about how to do that please check the persistence page.

In general what you should know is that setting the appendonly parameter to yes will start a background process to save the initial append-only file (obtained from the in-memory data set), and will append all subsequent commands to the append-only file, thus obtaining exactly the same effect as a Redis server that had AOF turned on from the start.

You can have both AOF and RDB snapshotting enabled if you want; the two options are not mutually exclusive.

Return value

Simple string reply: OK when the configuration was set properly. Otherwise an error is returned.
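
As a minimal sketch with the redis-py client (not part of this document), the same reconfiguration from client code; note that the save parameter is passed as the single space-separated string described above:

import redis

r = redis.Redis(decode_responses=True)
# Quantities must be plain integers in the base unit of the directive,
# so 100 MB is written out in bytes rather than as "100mb"
r.config_set('maxmemory', 100 * 1024 * 1024)
# The save parameter is a single string of space-separated integers
r.config_set('save', '900 1 300 10')
print(r.config_get('maxmemory'), r.config_get('save'))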

>CONFIG RESETSTAT

Reset the stats returned by INFO

Available since 2.0.0.

Time complexity: O(1)

Resets the statistics reported by Redis using the INFO command.

These are the counters that are reset:

  • Keyspace hits
  • Keyspace misses
  • Number of commands processed
  • Number of connections received
  • Number of expired keys
  • Number of rejected connections
  • Latest fork(2) time
  • The aof_delayed_fsync counter

Return value

Simple string reply: always OK.

>DBSIZE

Return the number of keys in the selected database

Available since 1.0.0.

Return the number of keys in the currently-selected database.

Return value

Integer reply

>DEBUG OBJECT key

Get debugging information about a key

Available since 1.0.0.

DEBUG OBJECT is a debugging command that should not be used by clients. Check the OBJECT command instead.

Simple string reply

>DEBUG SEGFAULT

Make the server crash

Available since 1.0.0.

DEBUG SEGFAULT performs an invalid memory access that crashes Redis. It is used to simulate bugs during development.

Simple string reply

>FLUSHALL

Remove all keys from all databases

Available since 1.0.0.

Delete all the keys of all the existing databases, not just the currently selected one. This command never fails.

The time-complexity for this operation is O(N), N being the number of keys in the database.

Return value

Simple string reply

>FLUSHDB

Remove all keys from the current database

Available since 1.0.0.

Delete all the keys of the currently selected DB. This command never fails.

The time-complexity for this operation is O(N), N being the number of keys in the database.

Return value

Simple string reply

>INFO [section]

Get information and statistics about the server

Available since 1.0.0.

The INFO command returns information and statistics about the server in a format that is simple to parse by computers and easy to read by humans.

The optional parameter can be used to select a specific section of information:

  • server: General information about the Redis server
  • clients: Client connections section
  • memory: Memory consumption related information
  • persistence: RDB and AOF related information
  • stats: General statistics
  • replication: Master/slave replication information
  • cpu: CPU consumption statistics
  • commandstats: Redis command statistics
  • cluster: Redis Cluster section
  • keyspace: Database related statistics

It can also take the following values:

  • all: Return all sections
  • default: Return only the default set of sections

When no parameter is provided, the default option is assumed.

Return value

Bulk string reply: as a collection of text lines.

Lines can contain a section name (starting with a # character) or a property. All the properties are in the form of field:value terminated by \r\n.

redis>  INFO

# Server
redis_version:2.9.999
redis_git_sha1:3bf72d0d
redis_git_dirty:0
redis_build_id:69b45658ca5a9e2d
redis_mode:standalone
os:Linux 3.13.7-x86_64-linode38 x86_64
arch_bits:32
multiplexing_api:epoll
gcc_version:4.4.1
process_id:14029
run_id:63bccba63aa231ac84b459af7a6ae34cb89caecd
tcp_port:6379
uptime_in_seconds:8955655
uptime_in_days:103
hz:10
lru_clock:4701576
config_file:/etc/redis/6379.conf

# Clients
connected_clients:8
client_longest_output_list:0
client_biggest_input_buf:0
blocked_clients:0

# Memory
used_memory:41586784
used_memory_human:39.66M
used_memory_rss:50810880
used_memory_peak:48307064
used_memory_peak_human:46.07M
used_memory_lua:22528
mem_fragmentation_ratio:1.22
mem_allocator:jemalloc-3.6.0

# Persistence
loading:0
rdb_changes_since_last_save:900
rdb_bgsave_in_progress:0
rdb_last_save_time:1413987525
rdb_last_bgsave_status:ok
rdb_last_bgsave_time_sec:0
rdb_current_bgsave_time_sec:-1
aof_enabled:0
aof_rewrite_in_progress:0
aof_rewrite_scheduled:0
aof_last_rewrite_time_sec:-1
aof_current_rewrite_time_sec:-1
aof_last_bgrewrite_status:ok
aof_last_write_status:ok

# Stats
total_connections_received:8044
total_commands_processed:37379126
instantaneous_ops_per_sec:4
rejected_connections:0
sync_full:0
sync_partial_ok:0
sync_partial_err:0
expired_keys:36345
evicted_keys:0
keyspace_hits:9907672
keyspace_misses:2516471
pubsub_channels:0
pubsub_patterns:0
latest_fork_usec:17597
migrate_cached_sockets:0

# Replication
role:master
connected_slaves:0
master_repl_offset:0
repl_backlog_active:0
repl_backlog_size:1048576
repl_backlog_first_byte_offset:0
repl_backlog_histlen:0

# CPU
used_cpu_sys:7645.42
used_cpu_user:7432.86
used_cpu_sys_children:344.51
used_cpu_user_children:3326.55

# Cluster
cluster_enabled:0

# Keyspace
db0:keys=219224,expires=200029,avg_ttl=6574175353

Notes

Please note that, depending on the version of Redis, some of the fields have been added or removed. A robust client application should therefore parse the result of this command by skipping unknown properties and gracefully handling missing fields.
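
A minimal sketch of that defensive approach, assuming the redis-py client (not part of this document), which already splits the field:value lines into a dictionary; unknown properties are simply ignored and missing fields are read with a default:

import redis

r = redis.Redis(decode_responses=True)
info = r.info()                # optionally r.info('memory') for a single section
# Read fields defensively: absent fields fall back to a default instead of failing
used = info.get('used_memory', 0)
frag = info.get('mem_fragmentation_ratio', None)
print('used_memory:', used, 'mem_fragmentation_ratio:', frag)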

Here is the description of fields for Redis >= 2.4.

Here is the meaning of all fields in the server section:

  • redis_version: Version of the Redis server
  • redis_git_sha1: Git SHA1
  • redis_git_dirty: Git dirty flag
  • os: Operating system hosting the Redis server
  • arch_bits: Architecture (32 or 64 bits)
  • multiplexing_api: event loop mechanism used by Redis
  • gcc_version: Version of the GCC compiler used to compile the Redis server
  • process_id: PID of the server process
  • run_id: Random value identifying the Redis server (to be used by Sentinel and Cluster)
  • tcp_port: TCP/IP listen port
  • uptime_in_seconds: Number of seconds since Redis server start
  • uptime_in_days: Same value expressed in days
  • lru_clock: Clock incrementing every minute, for LRU management

Here is the meaning of all fields in the clients section:

  • connected_clients: Number of client connections (excluding connections from slaves)
  • client_longest_output_list: longest output list among current client connections
  • client_biggest_input_buf: biggest input buffer among current client connections
  • blocked_clients: Number of clients pending on a blocking call (BLPOP, BRPOP, BRPOPLPUSH)

Here is the meaning of all fields in the memory section:

  • used_memory: total number of bytes allocated by Redis using its allocator (either standard libc, jemalloc, or an alternative allocator such as tcmalloc)

  • used_memory_human: Human readable representation of previous value

  • used_memory_rss: Number of bytes that Redis allocated as seen by the operating system (a.k.a resident set size). This is the number reported by tools such as top and ps.

  • used_memory_peak: Peak memory consumed by Redis (in bytes)

  • used_memory_peak_human: Human readable representation of previous value

  • used_memory_lua: Number of bytes used by the Lua engine

  • mem_fragmentation_ratio: Ratio between used_memory_rss and used_memory

  • mem_allocator: Memory allocator, chosen at compile time.

Ideally, the used_memory_rss value should be only slightly higher than used_memory. When rss >> used, a large difference means there is memory fragmentation (internal or external), which can be evaluated by checking mem_fragmentation_ratio. When used >> rss, it means part of Redis memory has been swapped off by the operating system: expect some significant latencies.

Because Redis does not have control over how its allocations are mapped to memory pages, high used_memory_rss is often the result of a spike in memory usage.

When Redis frees memory, the memory is given back to the allocator, and the allocator may or may not give the memory back to the system. There may be a discrepancy between the used_memory value and memory consumption as reported by the operating system. It may be due to the fact memory has been used and released by Redis, but not given back to the system. The used_memory_peak value is generally useful to check this point.

Here is the meaning of all fields in the persistence section:

  • loading: Flag indicating if the load of a dump file is on-going
  • rdb_changes_since_last_save: Number of changes since the last dump
  • rdb_bgsave_in_progress: Flag indicating a RDB save is on-going
  • rdb_last_save_time: Epoch-based timestamp of last successful RDB save
  • rdb_last_bgsave_status: Status of the last RDB save operation
  • rdb_last_bgsave_time_sec: Duration of the last RDB save operation in seconds
  • rdb_current_bgsave_time_sec: Duration of the on-going RDB save operation if any
  • aof_enabled: Flag indicating AOF logging is activated
  • aof_rewrite_in_progress: Flag indicating a AOF rewrite operation is on-going
  • aof_rewrite_scheduled: Flag indicating an AOF rewrite operation will be scheduled once the on-going RDB save is complete.
  • aof_last_rewrite_time_sec: Duration of the last AOF rewrite operation in seconds
  • aof_current_rewrite_time_sec: Duration of the on-going AOF rewrite operation if any
  • aof_last_bgrewrite_status: Status of the last AOF rewrite operation

changes_since_last_save refers to the number of operations that produced some kind of changes in the dataset since the last time either SAVE or BGSAVE was called.

If AOF is activated, these additional fields will be added:

  • aof_current_size: AOF current file size
  • aof_base_size: AOF file size on latest startup or rewrite
  • aof_pending_rewrite: Flag indicating an AOF rewrite operation will be scheduled once the on-going RDB save is complete.
  • aof_buffer_length: Size of the AOF buffer
  • aof_rewrite_buffer_length: Size of the AOF rewrite buffer
  • aof_pending_bio_fsync: Number of fsync pending jobs in background I/O queue
  • aof_delayed_fsync: Delayed fsync counter

If a load operation is on-going, these additional fields will be added:

  • loading_start_time: Epoch-based timestamp of the start of the load operation
  • loading_total_bytes: Total file size
  • loading_loaded_bytes: Number of bytes already loaded
  • loading_loaded_perc: Same value expressed as a percentage
  • loading_eta_seconds: ETA in seconds for the load to be complete

Here is the meaning of all fields in the stats section:

  • total_connections_received: Total number of connections accepted by the server
  • total_commands_processed: Total number of commands processed by the server
  • instantaneous_ops_per_sec: Number of commands processed per second
  • rejected_connections: Number of connections rejected because of maxclients limit
  • expired_keys: Total number of key expiration events
  • evicted_keys: Number of evicted keys due to maxmemory limit
  • keyspace_hits: Number of successful lookup of keys in the main dictionary
  • keyspace_misses: Number of failed lookup of keys in the main dictionary
  • pubsub_channels: Global number of pub/sub channels with client subscriptions
  • pubsub_patterns: Global number of pub/sub patterns with client subscriptions
  • latest_fork_usec: Duration of the latest fork operation in microseconds

Here is the meaning of all fields in the replication section:

  • role: Value is “master” if the instance is slave of no one, or “slave” if the instance is enslaved to a master. Note that a slave can be master of another slave (daisy chaining).

If the instance is a slave, these additional fields are provided:

  • master_host: Host or IP address of the master
  • master_port: Master listening TCP port
  • master_link_status: Status of the link (up/down)
  • master_last_io_seconds_ago: Number of seconds since the last interaction with master
  • master_sync_in_progress: Indicate the master is SYNCing to the slave

If a SYNC operation is on-going, these additional fields are provided:

  • master_sync_left_bytes: Number of bytes left before SYNCing is complete
  • master_sync_last_io_seconds_ago: Number of seconds since last transfer I/O during a SYNC operation

If the link between master and slave is down, an additional field is provided:

  • master_link_down_since_seconds: Number of seconds since the link is down

The following field is always provided:

  • connected_slaves: Number of connected slaves

For each slave, the following line is added:

  • slaveXXX: id, ip address, port, state

Here is the meaning of all fields in the cpu section:

  • used_cpu_sys: System CPU consumed by the Redis server
  • used_cpu_user: User CPU consumed by the Redis server
  • used_cpu_sys_children: System CPU consumed by the background processes
  • used_cpu_user_children: User CPU consumed by the background processes

The commandstats section provides statistics based on the command type, including the number of calls, the total CPU time consumed by these commands, and the average CPU consumed per command execution.

For each command type, the following line is added:

  • cmdstat_XXX:calls=XXX,usec=XXX,usec_per_call=XXX

The cluster section currently only contains a unique field:

  • cluster_enabled: Indicate Redis cluster is enabled

The keyspace section provides statistics on the main dictionary of each database. The statistics are the number of keys, and the number of keys with an expiration.

For each database, the following line is added:

  • dbXXX:keys=XXX,expires=XXX

>LASTSAVE

Get the UNIX time stamp of the last successful save to disk

Available since 1.0.0.

Return the UNIX TIME of the last DB save executed with success. A client may check if a BGSAVE command succeeded by first reading the LASTSAVE value, then issuing a BGSAVE command, and checking at regular intervals (every N seconds) whether LASTSAVE has changed.

Return value

Integer reply: a UNIX time stamp.
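
A minimal sketch of that polling pattern, assuming the redis-py client (not part of this document):

import time
import redis

r = redis.Redis()
before = r.lastsave()           # timestamp of the last successful save
r.bgsave()                      # start a background save
while r.lastsave() == before:   # poll until LASTSAVE changes
    time.sleep(1)
print('BGSAVE completed')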

>MONITOR

Listen for all requests received by the server in real time

Available since 1.0.0.

MONITOR is a debugging command that streams back every command processed by the Redis server. It can help in understanding what is happening to the database. This command can both be used via redis-cli and via telnet.

The ability to see all the requests processed by the server is useful in order to spot bugs in an application both when using Redis as a database and as a distributed caching system.

$ redis-cli monitor
1339518083.107412 [0 127.0.0.1:60866] "keys" "*"
1339518087.877697 [0 127.0.0.1:60866] "dbsize"
1339518090.420270 [0 127.0.0.1:60866] "set" "x" "6"
1339518096.506257 [0 127.0.0.1:60866] "get" "x"
1339518099.363765 [0 127.0.0.1:60866] "del" "x"
1339518100.544926 [0 127.0.0.1:60866] "get" "x"

Use SIGINT (Ctrl-C) to stop a MONITOR stream running via redis-cli.

$ telnet localhost 6379
Trying 127.0.0.1...
Connected to localhost.
Escape character is '^]'.
MONITOR
+OK
+1339518083.107412 [0 127.0.0.1:60866] "keys" "*"
+1339518087.877697 [0 127.0.0.1:60866] "dbsize"
+1339518090.420270 [0 127.0.0.1:60866] "set" "x" "6"
+1339518096.506257 [0 127.0.0.1:60866] "get" "x"
+1339518099.363765 [0 127.0.0.1:60866] "del" "x"
+1339518100.544926 [0 127.0.0.1:60866] "get" "x"
QUIT
+OK
Connection closed by foreign host.

Manually issue the QUIT command to stop a MONITOR stream running via telnet.

Cost of running MONITOR

Because MONITOR streams back all commands, its use comes at a cost. The following (totally unscientific) benchmark numbers illustrate what the cost of running MONITOR can be.

Benchmark result without MONITOR running:

$ src/redis-benchmark -c 10 -n 100000 -q
PING_INLINE: 101936.80 requests per second
PING_BULK: 102880.66 requests per second
SET: 95419.85 requests per second
GET: 104275.29 requests per second
INCR: 93283.58 requests per second

Benchmark result with MONITOR running (redis-cli monitor > /dev/null):

$ src/redis-benchmark -c 10 -n 100000 -q
PING_INLINE: 58479.53 requests per second
PING_BULK: 59136.61 requests per second
SET: 41823.50 requests per second
GET: 45330.91 requests per second
INCR: 41771.09 requests per second

In this particular case, running a single MONITOR client can reduce the throughput by more than 50%. Running more MONITOR clients will reduce throughput even more.

Return value

Non standard return value, just dumps the received commands in an infinite flow.

>ROLE

Return the role of the instance in the context of replication

Available since 2.8.12.

Provide information on the role of a Redis instance in the context of replication, by returning whether the instance is currently a master, slave, or sentinel. The command also returns additional information about the state of the replication (if the role is master or slave) or the list of monitored master names (if the role is sentinel).

Output format

The command returns an array of elements. The first element is the role of the instance, as one of the following three strings:

  • “master”
  • “slave”
  • “sentinel”

The additional elements of the array depend on the role.

Master output

An example of output when ROLE is called in a master instance:

1) "master"
2) (integer) 3129659
3) 1) 1) "127.0.0.1"
      2) "9001"
      3) "3129242"
   2) 1) "127.0.0.1"
      2) "9002"
      3) "3129543"

The master output is composed of the following parts:

  1. The string master.
  2. The current master replication offset, which is an offset that masters and slaves share to understand, in partial resynchronizations, the part of the replication stream the slave needs to fetch to continue.
  3. An array of sub-arrays representing the connected slaves. Every sub-array contains the slave IP, port, and the last acknowledged replication offset.

Slave output

An example of output when ROLE is called in a slave instance:

1) "slave"
2) "127.0.0.1"
3) (integer) 9000
4) "connected"
5) (integer) 3167038

The slave output is composed of the following parts:

  1. The string slave.
  2. The IP of the master.
  3. The port number of the master.
  4. The state of the replication from the point of view of the master, that can be connect (the instance needs to connect to its master), connecting (the slave-master connection is in progress), sync (the master and slave are trying to perform the synchronization), connected (the slave is online).
  5. The amount of data received by the slave so far, in terms of master replication offset.

Sentinel output

An example of Sentinel output:

1) "sentinel"
2) 1) "resque-master"
   2) "html-fragments-master"
   3) "stats-master"
   4) "metadata-master"

The sentinel output is composed of the following parts:

  1. The string sentinel.
  2. An array of master names monitored by this Sentinel instance.

Return value

Array reply: where the first element is one of master, slave, sentinel and the additional elements are role-specific as illustrated above.

History

  • This command was introduced in the middle of a Redis stable release, specifically with Redis 2.8.12.

Examples

redis>  ROLE

1) "master"
2) (integer) 0
3) (empty list or set)
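
A minimal sketch of reading the reply from client code, assuming the redis-py client (not part of this document) and its generic execute_command passthrough; the reply is indexed as described in the output format above:

import redis

r = redis.Redis(decode_responses=True)
reply = r.execute_command('ROLE')
role = reply[0]
if role == 'master':
    # reply[1] is the replication offset, reply[2] the list of connected slaves
    print('master, offset', reply[1], 'slaves', len(reply[2]))
elif role == 'slave':
    # reply[1:] is master host, master port, link state, received offset
    print('slave of %s:%s (%s), offset %s' % tuple(reply[1:5]))
else:
    # sentinel: reply[1] is the list of monitored master names
    print('sentinel monitoring', reply[1])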

>SAVE

Synchronously save the dataset to disk

Available since 1.0.0.

The SAVE command performs a synchronous save of the dataset, producing a point-in-time snapshot of all the data inside the Redis instance in the form of an RDB file.

You almost never want to call SAVE in production environments, where it will block all the other clients. Instead, BGSAVE is usually used. However, in case of issues preventing Redis from creating the background saving child (for instance errors in the fork(2) system call), the SAVE command can be a good last resort to perform the dump of the latest dataset.

Please refer to the persistence documentation for detailed information.

Return value

Simple string reply: The command returns OK on success.

>SHUTDOWN [NOSAVE] [SAVE]

Synchronously save the dataset to disk and then shut down the server

Available since 1.0.0.

The command behavior is the following:

  • Stop all the clients.
  • Perform a blocking SAVE if at least one save point is configured.
  • Flush the Append Only File if AOF is enabled.
  • Quit the server.

If persistence is enabled, this command makes sure that Redis is switched off without the loss of any data. This is not guaranteed if the client simply uses SAVE and then QUIT, because other clients may alter the DB data between the two commands.

Note: A Redis instance that is configured for not persisting on disk (no AOF configured, nor “save” directive) will not dump the RDB file on SHUTDOWN, as usually you don’t want Redis instances used only for caching to block when shutting down.

SAVE and NOSAVE modifiers

It is possible to specify an optional modifier to alter the behavior of the command. Specifically:

  • SHUTDOWN SAVE will force a DB saving operation even if no save points are configured.
  • SHUTDOWN NOSAVE will prevent a DB saving operation even if one or more save points are configured. (You can think of this variant as a hypothetical ABORT command that just stops the server.)

Return value

Simple string reply on error. On success nothing is returned since the server quits and the connection is closed.

>SLAVEOF host port

Make the server a slave of another instance, or promote it as master

Available since 1.0.0.

The SLAVEOF command can change the replication settings of a slave on the fly. If a Redis server is already acting as slave, the command SLAVEOF NO ONE will turn off the replication, turning the Redis server into a MASTER. In the proper form SLAVEOF hostname port will make the server a slave of another server listening at the specified hostname and port.

If a server is already a slave of some master, SLAVEOF hostname port will stop the replication against the old server and start the synchronization against the new one, discarding the old dataset.

The form SLAVEOF NO ONE will stop replication, turning the server into a MASTER, but will not discard the already replicated dataset. So, if the old master stops working, it is possible to turn the slave into a master and point the application to this new master for reads and writes. Later, when the other Redis server is fixed, it can be reconfigured to work as a slave.

Return value

Simple string reply

>SLOWLOG subcommand [argument]

Manages the Redis slow queries log

Available since 2.2.12.

This command is used in order to read and reset the Redis slow queries log.

Redis slow log overview

The Redis Slow Log is a system to log queries that exceeded a specified execution time. The execution time does not include I/O operations like talking with the client, sending the reply and so forth, but just the time needed to actually execute the command (this is the only stage of command execution where the thread is blocked and can not serve other requests in the meantime).

You can configure the slow log with two parameters: slowlog-log-slower-than tells Redis what is the execution time, in microseconds, to exceed in order for the command to get logged. Note that a negative number disables the slow log, while a value of zero forces the logging of every command. slowlog-max-len is the length of the slow log. The minimum value is zero. When a new command is logged and the slow log is already at its maximum length, the oldest one is removed from the queue of logged commands in order to make space.

The configuration can be done by editing redis.conf or while the server is running using the CONFIG GET and CONFIG SET commands.

Reading the slow log

The slow log is accumulated in memory, so no file is written with information about slow command executions. This makes the slow log so fast that you can enable logging of all the commands (by setting the slowlog-log-slower-than config parameter to zero) with only a minor performance hit.

To read the slow log, the SLOWLOG GET command is used, which returns every entry in the slow log. It is possible to return only the N most recent entries by passing an additional argument to the command (for instance SLOWLOG GET 10).

Note that you need a recent version of redis-cli in order to read the slow log output, since it uses some features of the protocol that were not formerly implemented in redis-cli (deeply nested multi bulk replies).

Output format

redis 127.0.0.1:6379> slowlog get 2
1) 1) (integer) 14
   2) (integer) 1309448221
   3) (integer) 15
   4) 1) "ping"
2) 1) (integer) 13
   2) (integer) 1309448128
   3) (integer) 30
   4) 1) "slowlog"
      2) "get"
      3) "100"

Every entry is composed of four fields:

  • A unique progressive identifier for every slow log entry.
  • The unix timestamp at which the logged command was processed.
  • The amount of time needed for its execution, in microseconds.
  • The array composing the arguments of the command.

The entry’s unique ID can be used in order to avoid processing slow log entries multiple times (for instance you may have a script sending you an email alert for every new slow log entry).

The ID is never reset in the course of the Redis server execution, only a server restart will reset it.

Obtaining the current length of the slow log

It is possible to get just the length of the slow log using the command SLOWLOG LEN.

Resetting the slow log

You can reset the slow log using the SLOWLOG RESET command. Once deleted the information is lost forever.
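
A minimal sketch of the same workflow from client code, assuming the redis-py client (not part of this document), which parses each entry into its id, timestamp, duration and command:

import redis

r = redis.Redis(decode_responses=True)
print('slow log length:', r.slowlog_len())
for entry in r.slowlog_get(10):       # the 10 most recent entries
    print(entry)                      # id, start_time, duration (usec), command
r.slowlog_reset()                     # once deleted the information is lost forever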

>SYNC

Internal command used for replication

Available since 1.0.0.

Examples

Return value

>TIME

Return the current server time

Available since 2.6.0.

Time complexity: O(1)

The TIME command returns the current server time as a two-item list: a Unix timestamp and the number of microseconds already elapsed in the current second. Basically the interface is very similar to that of the gettimeofday system call.

Return value

Array reply, specifically:

A multi bulk reply containing two elements:

  • unix time in seconds.
  • microseconds.

Examples

redis>  TIME

1) "1413987731"
2) "279640"

redis>  TIME

1) "1413987731"
2) "280550"

list

>BLPOP key [key …] timeout

Remove and get the first element in a list, or block until one is available

Available since 2.0.0.

Time complexity: O(1)

BLPOP is a blocking list pop primitive. It is the blocking version of LPOP because it blocks the connection when there are no elements to pop from any of the given lists. An element is popped from the head of the first list that is non-empty, with the given keys being checked in the order that they are given.

Non-blocking behavior

When BLPOP is called, if at least one of the specified keys contains a non-empty list, an element is popped from the head of the list and returned to the caller together with the key it was popped from.

Keys are checked in the order that they are given. Let’s say that the key list1 doesn’t exist and list2 and list3 hold non-empty lists. Consider the following command:

BLPOP list1 list2 list3 0

BLPOP guarantees to return an element from the list stored at list2 (since it is the first non empty list when checking list1, list2 and list3 in that order).

Blocking behavior

If none of the specified keys exist, BLPOP blocks the connection until another client performs an LPUSH or RPUSH operation against one of the keys.

Once new data is present on one of the lists, the client returns with the name of the key unblocking it and the popped value.

When BLPOP causes a client to block and a non-zero timeout is specified, the client will unblock returning a nil multi-bulk value when the specified timeout has expired without a push operation against at least one of the specified keys.

The timeout argument is interpreted as an integer value specifying the maximum number of seconds to block. A timeout of zero can be used to block indefinitely.

What key is served first? What client? What element? Priority ordering details.

  • If the client tries to block for multiple keys, but at least one key contains elements, the returned key / element pair is the first key from left to right that has one or more elements. In this case the client is not blocked. So for instance BLPOP key1 key2 key3 key4 0, assuming that both key2 and key4 are non-empty, will always return an element from key2.
  • If multiple clients are blocked for the same key, the first client to be served is the one that has been waiting the longest (the first that blocked for the key). Once a client is unblocked it does not retain any priority; when it blocks again with the next call to BLPOP it will be served according to the number of clients already blocked for the same key, which will all be served before it (from the first to the last that blocked).
  • When a client is blocking for multiple keys at the same time, and elements are available at the same time in multiple keys (because a transaction or a Lua script added elements to multiple lists), the client will be unblocked using the first key that received a push operation (assuming it has enough elements to serve our client, as there may be other clients as well waiting for this key). Basically, after the execution of every command, Redis will run a list of all the keys that received data AND that have at least one client blocked. The list is ordered by new element arrival time, from the first key that received data to the last. For every key processed, Redis will serve all the clients waiting for that key in a FIFO fashion, as long as there are elements in this key. When the key is empty or there are no longer clients waiting for this key, the next key that received new data in the previous command / transaction / script is processed, and so forth.

Behavior of BLPOP when multiple elements are pushed inside a list.

There are times when a list can receive multiple elements in the context of the same conceptual command:

  • Variadic push operations such as LPUSH mylist a b c.
  • After an EXEC of a MULTI block with multiple push operations against the same list.
  • Executing a Lua Script with Redis 2.6 or newer.

When multiple elements are pushed inside a list where there are clients blocking, the behavior is different for Redis 2.4 and Redis 2.6 or newer.

For Redis 2.6 what happens is that the command performing multiple pushes is executed, and only after the execution of the command the blocked clients are served. Consider this sequence of commands.

Client A:   BLPOP foo 0
Client B:   LPUSH foo a b c

If the above condition happens using a Redis 2.6 server or greater, Client A will be served with the c element, because after the LPUSH command the list contains c,b,a, so taking an element from the left means to return c.

Instead Redis 2.4 works in a different way: clients are served in the context of the push operation, so as soon as LPUSH foo a b c starts pushing the first element to the list, it is delivered to Client A, which will receive a (the first element pushed).

The behavior of Redis 2.4 creates a lot of problems when replicating or persisting data into the AOF file, so the much more generic and semantically simpler behaviour was introduced into Redis 2.6 to prevent problems.

Note that for the same reason a Lua script or a MULTI/EXEC block may push elements into a list and afterward delete the list. In this case the blocked clients will not be served at all and will continue to be blocked as long as no data is present on the list after the execution of a single command, transaction, or script.
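For instance, a client blocked on BLPOP mylist 0 remains blocked after the following transaction, because mylist is empty once the transaction as a whole has been executed (key and element names are arbitrary):

MULTI
LPUSH mylist a
DEL mylist
EXEC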

BLPOP inside a MULTI / EXEC transaction

BLPOP can be used with pipelining (sending multiple commands and reading the replies in batch), however this setup makes sense almost solely when it is the last command of the pipeline.

Using BLPOP inside a MULTI / EXEC block does not make a lot of sense as it would require blocking the entire server in order to execute the block atomically, which in turn does not allow other clients to perform a push operation. For this reason the behavior of BLPOP inside MULTI / EXEC when the list is empty is to return a nil multi-bulk reply, which is the same thing that happens when the timeout is reached.

If you like science fiction, think of time flowing at infinite speed inside a MULTI / EXEC block…
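A short sketch, assuming mylist is empty when EXEC runs:

redis>  MULTI
OK
redis>  BLPOP mylist 0
QUEUED
redis>  EXEC
1) (nil)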

Return value

Array reply: specifically:

  • A nil multi-bulk when no element could be popped and the timeout expired.
  • A two-element multi-bulk with the first element being the name of the key where an element was popped and the second element being the value of the popped element.

Examples

redis> DEL list1 list2
(integer) 0
redis> RPUSH list1 a b c
(integer) 3
redis> BLPOP list1 list2 0
1) "list1"
2) "a"

Reliable queues

When BLPOP returns an element to the client, it also removes the element from the list. This means that the element only exists in the context of the client: if the client crashes while processing the returned element, it is lost forever.

This can be a problem with some application where we want a more reliable messaging system. When this is the case, please check the BRPOPLPUSH command, that is a variant of BLPOP that adds the returned element to a target list before returning it to the client.

Pattern: Event notification

Using blocking list operations it is possible to mount different blocking primitives. For instance for some applications you may need to block waiting for elements in a Redis Set, so that as soon as a new element is added to the Set it is possible to retrieve it without resorting to polling. This would require a blocking version of SPOP, which is not available, but using blocking list operations we can easily accomplish this task.

The consumer will do:

LOOP forever
    WHILE SPOP(key) returns elements
        ... process elements ...
    END
    BRPOP helper_key
END

While in the producer side we’ll use simply:

MULTI
SADD key element
LPUSH helper_key x
EXEC

>BRPOP key [key …] timeout

Remove and get the last element in a list, or block until one is available

Available since 2.0.0.

Time complexity: O(1)

BRPOP is a blocking list pop primitive. It is the blocking version of RPOP because it blocks the connection when there are no elements to pop from any of the given lists. An element is popped from the tail of the first list that is non-empty, with the given keys being checked in the order that they are given.

See the BLPOP documentation for the exact semantics, since BRPOP is identical to BLPOP with the only difference being that it pops elements from the tail of a list instead of popping from the head.

Return value

Array reply: specifically:

  • A nil multi-bulk when no element could be popped and the timeout expired.
  • A two-element multi-bulk with the first element being the name of the key where an element was popped and the second element being the value of the popped element.

Examples

redis> DEL list1 list2
(integer) 0
redis> RPUSH list1 a b c
(integer) 3
redis> BRPOP list1 list2 0
1) "list1"
2) "c"

>BRPOPLPUSH source destination timeout

Pop a value from a list, push it to another list and return it; or block until one is available

Available since 2.2.0.

Time complexity: O(1)

BRPOPLPUSH is the blocking variant of RPOPLPUSH. When source contains elements, this command behaves exactly like RPOPLPUSH. When source is empty, Redis will block the connection until another client pushes to it or until timeout is reached. A timeout of zero can be used to block indefinitely.

See RPOPLPUSH for more information.
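A minimal non-blocking session, since the source list is non-empty (key names are illustrative; both keys are assumed to start empty):

redis>  RPUSH mylist "a" "b" "c"
(integer) 3
redis>  BRPOPLPUSH mylist myotherlist 0
"c"
redis>  LRANGE myotherlist 0 -1
1) "c"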

Return value

Bulk string reply: the element being popped from source and pushed to destination. If timeout is reached, a Null reply is returned.

Pattern: Reliable queue

Please see the pattern description in the RPOPLPUSH documentation.

Pattern: Circular list

Please see the pattern description in the RPOPLPUSH documentation.

>LINDEX key index

Get an element from a list by its index

Available since 1.0.0.

Time complexity: O(N) where N is the number of elements to traverse to get to the element at index. This makes asking for the first or the last element of the list O(1).

Returns the element at index index in the list stored at key. The index is zero-based, so 0 means the first element, 1 the second element and so on. Negative indices can be used to designate elements starting at the tail of the list. Here, -1 means the last element, -2 means the penultimate and so forth.

When the value at key is not a list, an error is returned.

Return value

Bulk string reply: the requested element, or nil when index is out of range.

Examples

redis>  LPUSH mylist "World"

(integer) 1

redis>  LPUSH mylist "Hello"

(integer) 2

redis>  LINDEX mylist 0

"Hello"

redis>  LINDEX mylist -1

"World"

redis>  LINDEX mylist 3

(nil)

>LINSERT key BEFORE|AFTER pivot value

Insert an element before or after another element in a list

Available since 2.2.0.

Time complexity: O(N) where N is the number of elements to traverse before seeing the value pivot. This means that inserting somewhere on the left end on the list (head) can be considered O(1) and inserting somewhere on the right end (tail) is O(N).

Inserts value in the list stored at key either before or after the reference value pivot.

When key does not exist, it is considered an empty list and no operation is performed.

An error is returned when key exists but does not hold a list value.

Return value

Integer reply: the length of the list after the insert operation, or -1 when the value pivot was not found.

Examples

redis>  RPUSH mylist "Hello"

(integer) 1

redis>  RPUSH mylist "World"

(integer) 2

redis>  LINSERT mylist BEFORE "World" "There"

(integer) 3

redis>  LRANGE mylist 0 -1

1) "Hello"
2) "There"
3) "World"

>LLEN key

Get the length of a list

Available since 1.0.0.

Time complexity: O(1)

Returns the length of the list stored at key. If key does not exist, it is interpreted as an empty list and 0 is returned. An error is returned when the value stored at key is not a list.

Return value

Integer reply: the length of the list at key.

Examples

redis>  LPUSH mylist "World"

(integer) 1

redis>  LPUSH mylist "Hello"

(integer) 2

redis>  LLEN mylist

(integer) 2

>LPOP key

Remove and get the first element in a list

Available since 1.0.0.

Time complexity: O(1)

Removes and returns the first element of the list stored at key.

Return value

Bulk string reply: the value of the first element, or nil when key does not exist.

Examples

redis>  RPUSH mylist "one"

(integer) 1

redis>  RPUSH mylist "two"

(integer) 2

redis>  RPUSH mylist "three"

(integer) 3

redis>  LPOP mylist

"one"

redis>  LRANGE mylist 0 -1

1) "two"
2) "three"

>LPUSH key value [value …]

Prepend one or multiple values to a list

Available since 1.0.0.

Time complexity: O(1)

Insert all the specified values at the head of the list stored at key. If key does not exist, it is created as an empty list before performing the push operations. When key holds a value that is not a list, an error is returned.

It is possible to push multiple elements using a single command call just by specifying multiple arguments at the end of the command. Elements are inserted one after the other to the head of the list, from the leftmost element to the rightmost element. So for instance the command LPUSH mylist a b c will result in a list containing c as first element, b as second element and a as third element.

Return value

Integer reply: the length of the list after the push operations.

History

  • >= 2.4: Accepts multiple value arguments. In Redis versions older than 2.4 it was possible to push a single value per command.

Examples

redis>  LPUSH mylist "world"

(integer) 1

redis>  LPUSH mylist "hello"

(integer) 2

redis>  LRANGE mylist 0 -1

1) "hello"
2) "world"

>LPUSHX key value

Prepend a value to a list, only if the list exists

Available since 2.2.0.

Time complexity: O(1)

Inserts value at the head of the list stored at key, only if key already exists and holds a list. In contrast to LPUSH, no operation will be performed when key does not yet exist.

Return value

Integer reply: the length of the list after the push operation.

Examples

redis>  LPUSH mylist "World"

(integer) 1

redis>  LPUSHX mylist "Hello"

(integer) 2

redis>  LPUSHX myotherlist "Hello"

(integer) 0

redis>  LRANGE mylist 0 -1

1) "Hello"
2) "World"

redis>  LRANGE myotherlist 0 -1

(empty list or set)

>LRANGE key start stop

Get a range of elements from a list

Available since 1.0.0.

Time complexity: O(S+N) where S is the distance of start offset from HEAD for small lists, from nearest end (HEAD or TAIL) for large lists; and N is the number of elements in the specified range.

Returns the specified elements of the list stored at key. The offsets start and stop are zero-based indexes, with 0 being the first element of the list (the head of the list), 1 being the next element and so on.

These offsets can also be negative numbers indicating offsets starting at the end of the list. For example, -1 is the last element of the list, -2 the penultimate, and so on.

Consistency with range functions in various programming languages

Note that if you have a list of numbers from 0 to 100, LRANGE list 0 10 will return 11 elements, that is, the rightmost item is included. This may or may not be consistent with behavior of range-related functions in your programming language of choice (think Ruby’s Range.new, Array#slice or Python’s range() function).

Out-of-range indexes

Out of range indexes will not produce an error. If start is larger than the end of the list, an empty list is returned. If stop is larger than the actual end of the list, Redis will treat it like the last element of the list.

Return value

Array reply: list of elements in the specified range.

Examples

redis>  RPUSH mylist "one"

(integer) 1

redis>  RPUSH mylist "two"

(integer) 2

redis>  RPUSH mylist "three"

(integer) 3

redis>  LRANGE mylist 0 0

1) "one"

redis>  LRANGE mylist -3 2

1) "one"
2) "two"
3) "three"

redis>  LRANGE mylist -100 100

1) "one"
2) "two"
3) "three"

redis>  LRANGE mylist 5 10

(empty list or set)

>LREM key count value

Remove elements from a list

Available since 1.0.0.

Time complexity: O(N) where N is the length of the list.

Removes the first count occurrences of elements equal to value from the list stored at key. The count argument influences the operation in the following ways:

  • count > 0: Remove elements equal to value moving from head to tail.
  • count < 0: Remove elements equal to value moving from tail to head.
  • count = 0: Remove all elements equal to value.

For example, LREM list -2 "hello" will remove the last two occurrences of "hello" in the list stored at list.

Note that non-existing keys are treated like empty lists, so when key does not exist, the command will always return 0.
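For completeness, a short sketch of count = 0 removing every occurrence (mylist2 is an arbitrary key assumed to start empty):

redis>  RPUSH mylist2 "a" "b" "a" "a"
(integer) 4
redis>  LREM mylist2 0 "a"
(integer) 3
redis>  LRANGE mylist2 0 -1
1) "b"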

Return value

Integer reply: the number of removed elements.

Examples

redis>  RPUSH mylist "hello"

(integer) 1

redis>  RPUSH mylist "hello"

(integer) 2

redis>  RPUSH mylist "foo"

(integer) 3

redis>  RPUSH mylist "hello"

(integer) 4

redis>  LREM mylist -2 "hello"

(integer) 2

redis>  LRANGE mylist 0 -1

1) "hello"
2) "foo"

>LSET key index value

Set the value of an element in a list by its index

Available since 1.0.0.

Time complexity: O(N) where N is the length of the list. Setting either the first or the last element of the list is O(1).

Sets the list element at index to value. For more information on the index argument, see LINDEX.

An error is returned for out of range indexes.

Return value

Simple string reply

Examples

redis>  RPUSH mylist "one"

(integer) 1

redis>  RPUSH mylist "two"

(integer) 2

redis>  RPUSH mylist "three"

(integer) 3

redis>  LSET mylist 0 "four"

OK

redis>  LSET mylist -2 "five"

OK

redis>  LRANGE mylist 0 -1

1) "four"
2) "five"
3) "three"

>LTRIM key start stop

Trim a list to the specified range

Available since 1.0.0.

Time complexity: O(N) where N is the number of elements to be removed by the operation.

Trim an existing list so that it will contain only the specified range of elements. Both start and stop are zero-based indexes, where 0 is the first element of the list (the head), 1 the next element and so on.

For example: LTRIM foobar 0 2 will modify the list stored at foobar so that only the first three elements of the list will remain.

start and end can also be negative numbers indicating offsets from the end of the list, where -1 is the last element of the list, -2 the penultimate element and so on.

Out of range indexes will not produce an error: if start is larger than the end of the list, or start > end, the result will be an empty list (which causes key to be removed). If end is larger than the end of the list, Redis will treat it like the last element of the list.

A common use of LTRIM is together with LPUSH / RPUSH. For example:

LPUSH mylist someelement
LTRIM mylist 0 99

This pair of commands will push a new element on the list, while making sure that the list will not grow larger than 100 elements. This is very useful when using Redis to store logs for example. It is important to note that when used in this way LTRIM is an O(1) operation because in the average case just one element is removed from the tail of the list.

Return value

Simple string reply

Examples

redis>  RPUSH mylist "one"

(integer) 1

redis>  RPUSH mylist "two"

(integer) 2

redis>  RPUSH mylist "three"

(integer) 3

redis>  LTRIM mylist 1 -1

OK

redis>  LRANGE mylist 0 -1

1) "two"
2) "three"

>RPOP key

Remove and get the last element in a list

Available since 1.0.0.

Time complexity: O(1)

Removes and returns the last element of the list stored at key.

Return value

Bulk string reply: the value of the last element, or nil when key does not exist.

Examples

redis>  RPUSH mylist "one"

(integer) 1

redis>  RPUSH mylist "two"

(integer) 2

redis>  RPUSH mylist "three"

(integer) 3

redis>  RPOP mylist

"three"

redis>  LRANGE mylist 0 -1

1) "one"
2) "two"

>RPOPLPUSH source destination

Remove the last element in a list, append it to another list and return it

Available since 1.2.0.

Time complexity: O(1)

Atomically returns and removes the last element (tail) of the list stored at source, and pushes the element at the first element (head) of the list stored at destination.

For example: consider source holding the list a,b,c, and destination holding the list x,y,z. Executing RPOPLPUSH results in source holding a,b and destination holding c,x,y,z.

If source does not exist, the value nil is returned and no operation is performed. If source and destination are the same, the operation is equivalent to removing the last element from the list and pushing it as first element of the list, so it can be considered as a list rotation command.

Return value

Bulk string reply: the element being popped and pushed.

Examples

redis>  RPUSH mylist "one"

(integer) 1

redis>  RPUSH mylist "two"

(integer) 2

redis>  RPUSH mylist "three"

(integer) 3

redis>  RPOPLPUSH mylist myotherlist

"three"

redis>  LRANGE mylist 0 -1

1) "one"
2) "two"

redis>  LRANGE myotherlist 0 -1

1) "three"

Pattern: Reliable queue

Redis is often used as a messaging server to implement processing of background jobs or other kinds of messaging tasks. A simple form of queue is often obtained by pushing values into a list on the producer side, and waiting for these values on the consumer side using RPOP (with polling), or BRPOP if the client is better served by a blocking operation.

However in this context the obtained queue is not reliable, as messages can be lost, for example when there is a network problem or when the consumer crashes just after the message is received but before it has been processed.

RPOPLPUSH (or BRPOPLPUSH for the blocking variant) offers a way to avoid this problem: the consumer fetches the message and at the same time pushes it into a processing list. It will use the LREM command in order to remove the message from the processing list once the message has been processed.

An additional client may monitor the processing list for items that remain there for too long, and will push those timed-out items into the queue again if needed.
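A sketch of a consumer built on this pattern, in the same pseudocode style used elsewhere in this document (queue and processing are illustrative key names):

LOOP forever
    item = BRPOPLPUSH queue processing 0
    ... process item ...
    LREM processing 1 item
END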

Pattern: Circular list

Using RPOPLPUSH with the same source and destination key, a client can visit all the elements of an N-elements list, one after the other, in O(N) without transferring the full list from the server to the client using a single LRANGE operation.

The above pattern works even under both of the following conditions:

  • There are multiple clients rotating the list: they’ll fetch different elements, until all the elements of the list are visited, and the process restarts.
  • Other clients are actively pushing new items at the end of the list.

The above makes it very simple to implement a system where a set of items must be processed by N workers continuously as fast as possible. An example is a monitoring system that must check that a set of web sites are reachable, with the smallest delay possible, using a number of parallel workers.

Note that this implementation of workers is trivially scalable and reliable, because even if a message is lost the item is still in the queue and will be processed at the next iteration.
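For instance, rotating a small list in place (tasks is an arbitrary key assumed to start empty):

redis>  RPUSH tasks "a" "b" "c"
(integer) 3
redis>  RPOPLPUSH tasks tasks
"c"
redis>  LRANGE tasks 0 -1
1) "c"
2) "a"
3) "b"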

>RPUSH key value [value …]

Append one or multiple values to a list

Available since 1.0.0.

Time complexity: O(1)

Insert all the specified values at the tail of the list stored at key. If key does not exist, it is created as an empty list before performing the push operation. When key holds a value that is not a list, an error is returned.

It is possible to push multiple elements using a single command call just by specifying multiple arguments at the end of the command. Elements are inserted one after the other to the tail of the list, from the leftmost element to the rightmost element. So for instance the command RPUSH mylist a b c will result in a list containing a as first element, b as second element and c as third element.

Return value

Integer reply: the length of the list after the push operation.

History

  • >= 2.4: Accepts multiple value arguments. In Redis versions older than 2.4 it was possible to push a single value per command.

Examples

redis>  RPUSH mylist "hello"

(integer) 1

redis>  RPUSH mylist "world"

(integer) 2

redis>  LRANGE mylist 0 -1

1) "hello"
2) "world"

>RPUSHX key value

Append a value to a list, only if the list exists

Available since 2.2.0.

Time complexity: O(1)

Inserts value at the tail of the list stored at key, only if key already exists and holds a list. In contrast to RPUSH, no operation will be performed when key does not yet exist.

Return value

Integer reply: the length of the list after the push operation.

Examples

redis>  RPUSH mylist "Hello"

(integer) 1

redis>  RPUSHX mylist "World"

(integer) 2

redis>  RPUSHX myotherlist "World"

(integer) 0

redis>  LRANGE mylist 0 -1

1) "Hello"
2) "World"

redis>  LRANGE myotherlist 0 -1

(empty list or set)

generic

>DEL key [key …]

Delete a key

Available since 1.0.0.

Time complexity: O(N) where N is the number of keys that will be removed. When a key to remove holds a value other than a string, the individual complexity for this key is O(M) where M is the number of elements in the list, set, sorted set or hash. Removing a single key that holds a string value is O(1).

Removes the specified keys. A key is ignored if it does not exist.

Return value

Integer reply: The number of keys that were removed.

Examples

redis>  SET key1 "Hello"

OK

redis>  SET key2 "World"

OK

redis>  DEL key1 key2 key3

(integer) 2

>DUMP key

Return a serialized version of the value stored at the specified key.

Available since 2.6.0.

Time complexity: O(1) to access the key and additional O(N*M) to serialize it, where N is the number of Redis objects composing the value and M their average size. For small string values the time complexity is thus O(1)+O(1*M) where M is small, so simply O(1).

Serialize the value stored at key in a Redis-specific format and return it to the user. The returned value can be synthesized back into a Redis key using the RESTORE command.

The serialization format is opaque and non-standard, however it has a few semantic characteristics:

  • It contains a 64-bit checksum that is used to make sure errors will be detected. The RESTORE command makes sure to check the checksum before synthesizing a key using the serialized value.
  • Values are encoded in the same format used by RDB.
  • An RDB version is encoded inside the serialized value, so that different Redis versions with incompatible RDB formats will refuse to process the serialized value.

The serialized value does NOT contain expire information. In order to capture the time to live of the current value the PTTL command should be used.
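Since DUMP does not carry the TTL, a common sketch is to pair it with PTTL and pass the remaining time to RESTORE (documented later in this section); newkey is an illustrative destination and mykey is assumed to have an expire set:

payload = DUMP mykey
remaining = PTTL mykey
RESTORE newkey remaining payload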

If key does not exist a nil bulk reply is returned.

Return value

Bulk string reply: the serialized value.

Examples

redis>  SET mykey 10

OK

redis>  DUMP mykey

"\u0000\xC0\n\u0006\u0000\xF8r?\xC5\xFB\xFB_("

>EXISTS key

Determine if a key exists

Available since 1.0.0.

Time complexity: O(1)

Returns if key exists.

Return value

Integer reply, specifically:

  • 1 if the key exists.
  • 0 if the key does not exist.

Examples

redis>  SET key1 "Hello"

OK

redis>  EXISTS key1

(integer) 1

redis>  EXISTS key2

(integer) 0

>EXPIRE key seconds

Set a key’s time to live in seconds

Available since 1.0.0.

Time complexity: O(1)

Set a timeout on key. After the timeout has expired, the key will automatically be deleted. A key with an associated timeout is often said to be volatile in Redis terminology.

The timeout is cleared only when the key is removed using the DEL command or overwritten using the SET or GETSET commands. This means that all the operations that conceptually alter the value stored at the key without replacing it with a new one will leave the timeout untouched. For instance, incrementing the value of a key with INCR, pushing a new value into a list with LPUSH, or altering the field value of a hash with HSET are all operations that will leave the timeout untouched.

The timeout can also be cleared, turning the key back into a persistent key, using the PERSIST command.

If a key is renamed with RENAME, the associated time to live is transferred to the new key name.

If a key is overwritten by RENAME, like in the case of an existing key Key_A that is overwritten by a call like RENAME Key_B Key_A, it does not matter if the original Key_A had a timeout associated or not, the new key Key_A will inherit all the characteristics of Key_B.

Refreshing expires

It is possible to call EXPIRE using as argument a key that already has an existing expire set. In this case the time to live of a key is updated to the new value. There are many useful applications for this, an example is documented in the Navigation session pattern section below.

Differences in Redis prior 2.1.3

In Redis versions prior 2.1.3 altering a key with an expire set using a command altering its value had the effect of removing the key entirely. This semantics was needed because of limitations in the replication layer that are now fixed.

Return value

Integer reply, specifically:

  • 1 if the timeout was set.
  • 0 if key does not exist or the timeout could not be set.

Examples

redis>  SET mykey "Hello"

OK

redis>  EXPIRE mykey 10

(integer) 1

redis>  TTL mykey

(integer) 10

redis>  SET mykey "Hello World"

OK

redis>  TTL mykey

(integer) -1

Pattern: Navigation session

Imagine you have a web service and you are interested in the latest N pages recently visited by your users, such that each adjacent page view was not performed more than 60 seconds after the previous one. Conceptually you may think of this set of page views as a Navigation session of your user, which may contain interesting information about what kind of products he or she is currently looking for, so that you can recommend related products.

You can easily model this pattern in Redis using the following strategy: every time the user does a page view you call the following commands:

MULTI
RPUSH pageviews.user:<userid> http://.....
EXPIRE pageviews.user:<userid> 60
EXEC

If the user is idle for more than 60 seconds, the key will be deleted and only subsequent page views with less than 60 seconds of difference will be recorded.

This pattern is easily modified to use counters using INCR instead of lists using RPUSH.

Appendix: Redis expires

Keys with an expire

Normally Redis keys are created without an associated time to live. The key will simply live forever, unless it is removed by the user in an explicit way, for instance using the DEL command.

The EXPIRE family of commands is able to associate an expire to a given key, at the cost of some additional memory used by the key. When a key has an expire set, Redis will make sure to remove the key when the specified amount of time elapsed.

The key time to live can be updated or entirely removed using the EXPIRE and PERSIST command (or other strictly related commands).

Expire accuracy

In Redis 2.4 the expire might not be pin-point accurate, and it could be between zero and one seconds out.

Since Redis 2.6 the expire error is from 0 to 1 milliseconds.

Expires and persistence

Keys expiring information is stored as absolute Unix timestamps (in milliseconds in case of Redis version 2.6 or greater). This means that the time is flowing even when the Redis instance is not active.

For expires to work well, the computer time must be stable. If you move an RDB file between two computers with a big desync in their clocks, funny things may happen (like all the keys being already expired at loading time).

Even running instances will always check the computer clock, so for instance if you set a key with a time to live of 1000 seconds, and then set your computer time 2000 seconds in the future, the key will be expired immediately, instead of lasting for 1000 seconds.

How Redis expires keys

Redis keys are expired in two ways: a passive way, and an active way.

A key is passively expired simply when some client tries to access it, and the key is found to be timed out.

Of course this is not enough, as there are expired keys that will never be accessed again. These keys should be expired anyway, so periodically Redis tests a few keys at random among the keys with an expire set. All the keys that are already expired are deleted from the keyspace.

Specifically this is what Redis does 10 times per second:

  1. Test 100 random keys from the set of keys with an associated expire.
  2. Delete all the keys found expired.
  3. If more than 25 keys were expired, start again from step 1.

This is a trivial probabilistic algorithm; basically the assumption is that our sample is representative of the whole key space, and we continue to expire until the percentage of keys that are likely to be expired is under 25%.

This means that at any given moment the maximum amount of keys already expired that are using memory is at max equal to max amount of write operations per second divided by 4.

In order to obtain a correct behavior without sacrificing consistency, when a key expires, a DEL operation is synthesized in both the AOF file and sent to all the attached slaves. This way the expiration process is centralized in the master instance, and there is no chance of consistency errors.

However while the slaves connected to a master will not expire keys independently (but will wait for the DEL coming from the master), they’ll still take the full state of the expires existing in the dataset, so when a slave is elected to a master it will be able to expire the keys independently, fully acting as a master.

>EXPIREAT key timestamp

Set the expiration for a key as a UNIX timestamp

Available since 1.2.0.

Time complexity: O(1)

EXPIREAT has the same effect and semantic as EXPIRE, but instead of specifying the number of seconds representing the TTL (time to live), it takes an absolute Unix timestamp (seconds since January 1, 1970).

Please for the specific semantics of the command refer to the documentation of EXPIRE.

Background

EXPIREAT was introduced in order to convert relative timeouts to absolute timeouts for the AOF persistence mode. Of course, it can be used directly to specify that a given key should expire at a given time in the future.

Return value

Integer reply, specifically:

  • 1 if the timeout was set.
  • 0 if key does not exist or the timeout could not be set (see: EXPIRE).

Examples

redis>  SET mykey "Hello"

OK

redis>  EXISTS mykey

(integer) 1

redis>  EXPIREAT mykey 1293840000

(integer) 1

redis>  EXISTS mykey

(integer) 0

>KEYS pattern

Find all keys matching the given pattern

Available since 1.0.0.

Time complexity: O(N) with N being the number of keys in the database, under the assumption that the key names in the database and the given pattern have limited length.

Returns all keys matching pattern.

While the time complexity for this operation is O(N), the constant times are fairly low. For example, Redis running on an entry level laptop can scan a 1 million key database in 40 milliseconds.

Warning: consider KEYS as a command that should only be used in production environments with extreme care. It may ruin performance when it is executed against large databases. This command is intended for debugging and special operations, such as changing your keyspace layout. Don’t use KEYS in your regular application code. If you’re looking for a way to find keys in a subset of your keyspace, consider using SCAN or sets.

Supported glob-style patterns:

  • h?llo matches hello, hallo and hxllo
  • h*llo matches hllo and heeeello
  • h[ae]llo matches hello and hallo, but not hillo

Use \ to escape special characters if you want to match them verbatim.

Return value

Array reply: list of keys matching pattern.

Examples

redis>  MSET one 1 two 2 three 3 four 4

OK

redis>  KEYS *o*

1) "four"
2) "one"
3) "two"

redis>  KEYS t??

1) "two"

redis>  KEYS *

1) "four"
2) "one"
3) "two"
4) "three"

>MIGRATE host port key destination-db timeout [COPY] [REPLACE]

Atomically transfer a key from a Redis instance to another one.

Available since 2.6.0.

Time complexity: This command actually executes a DUMP+DEL in the source instance, and a RESTORE in the target instance. See the pages of these commands for time complexity. Also an O(N) data transfer between the two instances is performed.

Atomically transfer a key from a source Redis instance to a destination Redis instance. On success the key is deleted from the original instance and is guaranteed to exist in the target instance.

The command is atomic and blocks the two instances for the time required to transfer the key; at any given time the key will appear to exist in either the source instance or the target instance, unless a timeout error occurs.

The command internally uses DUMP to generate the serialized version of the key value, and RESTORE in order to synthesize the key in the target instance. The source instance acts as a client for the target instance. If the target instance returns OK to the RESTORE command, the source instance deletes the key using DEL.

The timeout specifies the maximum idle time, in milliseconds, allowed at any moment of the communication with the destination instance. This means that the operation does not need to be completed within the specified amount of milliseconds, but that the transfer should make progress without blocking for more than the specified amount of milliseconds.

MIGRATE needs to perform I/O operations and to honor the specified timeout. When there is an I/O error during the transfer, or if the timeout is reached, the operation is aborted and the special error -IOERR is returned. When this happens the following two cases are possible:

  • The key may be on both the instances.
  • The key may be only in the source instance.

It is not possible for the key to get lost in the event of a timeout, but the client calling MIGRATE, in the event of a timeout error, should check if the key is also present in the target instance and act accordingly.

When any other error is returned (starting with ERR) MIGRATE guarantees that the key is still only present in the originating instance (unless a key with the same name was also already present on the target instance).

On success OK is returned.

Options

  • COPY – Do not remove the key from the local instance.
  • REPLACE – Replace existing key on the remote instance.

COPY and REPLACE will be available in 3.0 and are not available in 2.6 or 2.8
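A usage sketch, with a hypothetical target instance listening at 192.168.1.34:6379, moving mykey into database 0 of the target with a 5000 millisecond timeout:

MIGRATE 192.168.1.34 6379 mykey 0 5000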

Return value

Simple string reply: The command returns OK on success.

>MOVE key db

Move a key to another database

Available since 1.0.0.

Time complexity: O(1)

Move key from the currently selected database (see SELECT) to the specified destination database. When key already exists in the destination database, or it does not exist in the source database, it does nothing. It is possible to use MOVE as a locking primitive because of this.

Return value

Integer reply, specifically:

  • 1 if key was moved.
  • 0 if key was not moved.

>OBJECT subcommand [arguments [arguments …]]

Inspect the internals of Redis objects

Available since 2.2.3.

Time complexity: O(1) for all the currently implemented subcommands.

The OBJECT command allows you to inspect the internals of Redis Objects associated with keys. It is useful for debugging or to understand if your keys are using the specially encoded data types to save space. Your application may also use the information reported by the OBJECT command to implement application level key eviction policies when using Redis as a cache.

The OBJECT command supports multiple sub commands:

  • OBJECT REFCOUNT <key> returns the number of references of the value associated with the specified key. This command is mainly useful for debugging.
  • OBJECT ENCODING <key> returns the kind of internal representation used in order to store the value associated with a key.
  • OBJECT IDLETIME <key> returns the number of seconds since the object stored at the specified key is idle (not requested by read or write operations). While the value is returned in seconds the actual resolution of this timer is 10 seconds, but may vary in future implementations.

Objects can be encoded in different ways:

  • Strings can be encoded as raw (normal string encoding) or int (strings representing integers in a 64 bit signed interval are encoded in this way in order to save space).
  • Lists can be encoded as ziplist or linkedlist. The ziplist is the special representation that is used to save space for small lists.
  • Sets can be encoded as intset or hashtable. The intset is a special encoding used for small sets composed solely of integers.
  • Hashes can be encoded as zipmap or hashtable. The zipmap is a special encoding used for small hashes.
  • Sorted Sets can be encoded as ziplist or skiplist format. As for the List type small sorted sets can be specially encoded using ziplist, while the skiplist encoding is the one that works with sorted sets of any size.

All the specially encoded types are automatically converted to the general type once you perform an operation that makes it impossible for Redis to retain the space saving encoding.

Return value

Different return values are used for different subcommands.

  • Subcommands refcount and idletime return integers.
  • Subcommand encoding returns a bulk reply.

If the object you try to inspect is missing, a null bulk reply is returned.

Examples

redis> lpush mylist "Hello World"
(integer) 4
redis> object refcount mylist
(integer) 1
redis> object encoding mylist
"ziplist"
redis> object idletime mylist
(integer) 10

In the following example you can see how the encoding changes once Redis is no longer able to use the space saving encoding.

redis> set foo 1000
OK
redis> object encoding foo
"int"
redis> append foo bar
(integer) 7
redis> get foo
"1000bar"
redis> object encoding foo
"raw"

>PERSIST key

Remove the expiration from a key

Available since 2.2.0.

Time complexity: O(1)

Remove the existing timeout on key, turning the key from volatile (a key with an expire set) to persistent (a key that will never expire as no timeout is associated).

Return value

Integer reply, specifically:

  • 1 if the timeout was removed.
  • 0 if key does not exist or does not have an associated timeout.

Examples

redis>  SET mykey "Hello"

OK

redis>  EXPIRE mykey 10

(integer) 1

redis>  TTL mykey

(integer) 10

redis>  PERSIST mykey

(integer) 1

redis>  TTL mykey

(integer) -1

>PEXPIRE key milliseconds

Set a key’s time to live in milliseconds

Available since 2.6.0.

Time complexity: O(1)

This command works exactly like EXPIRE but the time to live of the key is specified in milliseconds instead of seconds.

Integer reply, specifically:

  • 1 if the timeout was set.
  • 0 if key does not exist or the timeout could not be set.

Examples

redis>  SET mykey "Hello"

OK

redis>  PEXPIRE mykey 1500

(integer) 1

redis>  TTL mykey

(integer) 1

redis>  PTTL mykey

(integer) 1498

>PEXPIREAT key milliseconds-timestamp

Set the expiration for a key as a UNIX timestamp specified in milliseconds

Available since 2.6.0.

Time complexity: O(1)

PEXPIREAT has the same effect and semantic as EXPIREAT, but the Unix time at which the key will expire is specified in milliseconds instead of seconds.

Return value

Integer reply, specifically:

  • 1 if the timeout was set.
  • 0 if key does not exist or the timeout could not be set (see: EXPIRE).

Examples

redis>  SET mykey "Hello"

OK

redis>  PEXPIREAT mykey 1555555555005

(integer) 1

redis>  TTL mykey

(integer) 141567773

redis>  PTTL mykey

(integer) 141567772643

>PTTL key

Get the time to live for a key in milliseconds

Available since 2.6.0.

Time complexity: O(1)

Like TTL this command returns the remaining time to live of a key that has an expire set, with the sole difference that TTL returns the amount of remaining time in seconds while PTTL returns it in milliseconds.

In Redis 2.6 or older the command returns -1 if the key does not exist or if the key exist but has no associated expire.

Starting with Redis 2.8 the return value in case of error changed:

  • The command returns -2 if the key does not exist.
  • The command returns -1 if the key exists but has no associated expire.

Return value

Integer reply: TTL in milliseconds, or a negative value in order to signal an error (see the description above).

Examples

redis>  SET mykey "Hello"

OK

redis>  EXPIRE mykey 1

(integer) 1

redis>  PTTL mykey

(integer) 999

>RANDOMKEY

Return a random key from the keyspace

Available since 1.0.0.

Time complexity: O(1)

Return a random key from the currently selected database.

Return value

Bulk string reply: the random key, or nil when the database is empty.

>RENAME key newkey

Rename a key

Available since 1.0.0.

Time complexity: O(1)

Renames key to newkey. It returns an error when the source and destination names are the same, or when key does not exist. If newkey already exists it is overwritten; when this happens RENAME executes an implicit DEL operation, so if the deleted key contains a very big value it may cause high latency even though RENAME itself is usually a constant-time operation.

Return value

Simple string reply

Examples

redis>  SET mykey "Hello"

OK

redis>  RENAME mykey myotherkey

OK

redis>  GET myotherkey

"Hello"

>RENAMENX key newkey

Rename a key, only if the new key does not exist

Available since 1.0.0.

Time complexity: O(1)

Renames key to newkey if newkey does not yet exist. It returns an error under the same conditions as RENAME.

Return value

Integer reply, specifically:

  • 1 if key was renamed to newkey.
  • 0 if newkey already exists.

Examples

redis>  SET mykey "Hello"

OK

redis>  SET myotherkey "World"

OK

redis>  RENAMENX mykey myotherkey

(integer) 0

redis>  GET myotherkey

"World"

>RESTORE key ttl serialized-value

Create a key using the provided serialized value, previously obtained using DUMP.

Available since 2.6.0.

Time complexity: O(1) to create the new key and additional O(N*M) to reconstruct the serialized value, where N is the number of Redis objects composing the value and M their average size. For small string values the time complexity is thus O(1)+O(1*M) where M is small, so simply O(1). However for sorted set values the complexity is O(N*M*log(N)) because inserting values into sorted sets is O(log(N)).

Create a key associated with a value that is obtained by deserializing the provided serialized value (obtained via DUMP).

If ttl is 0 the key is created without any expire, otherwise the specified expire time (in milliseconds) is set.

RESTORE checks the RDB version and data checksum. If they don’t match an error is returned.

Return value

Simple string reply: The command returns OK on success.

Examples

redis> DEL mykey
(integer) 0
redis> RESTORE mykey 0 "\n\x17\x17\x00\x00\x00\x12\x00\x00\x00\x03\x00\
    x00\xc0\x01\x00\x04\xc0\x02\x00\x04\xc0\x03\x00\
    xff\x04\x00u#<\xc0;.\xe9\xdd"
OK
redis> TYPE mykey
list
redis> LRANGE mykey 0 -1
1) "1"
2) "2"
3) "3"

>SORT key [BY pattern] [LIMIT offset count] [GET pattern [GET pattern …]] [ASC|DESC] [ALPHA] [STORE destination]

Sort the elements in a list, set or sorted set

Available since 1.0.0.

Time complexity: O(N+M*log(M)) where N is the number of elements in the list or set to sort, and M the number of returned elements. When the elements are not sorted, complexity is currently O(N) as there is a copy step that will be avoided in next releases.

Returns or stores the elements contained in the list, set or sorted set at key. By default, sorting is numeric and elements are compared by their value interpreted as double precision floating point number. This is SORT in its simplest form:

SORT mylist

Assuming mylist is a list of numbers, this command will return the same list with the elements sorted from small to large. In order to sort the numbers from large to small, use the DESC modifier:

SORT mylist DESC

When mylist contains string values and you want to sort them lexicographically, use the ALPHA modifier:

SORT mylist ALPHA

Redis is UTF-8 aware, assuming you correctly set the LC_COLLATE environment variable.

The number of returned elements can be limited using the LIMIT modifier. This modifier takes the offset argument, specifying the number of elements to skip, and the count argument, specifying the number of elements to return starting from offset. The following example will return 10 elements of the sorted version of mylist, starting at element 0 (offset is zero-based):

SORT mylist LIMIT 0 10

Almost all modifiers can be used together. The following example will return the first 5 elements, lexicographically sorted in descending order:

SORT mylist LIMIT 0 5 ALPHA DESC

Sorting by external keys

Sometimes you want to sort elements using external keys as weights to compare instead of comparing the actual elements in the list, set or sorted set. Let’s say the list mylist contains the elements 1, 2 and 3 representing unique IDs of objects stored in object_1, object_2 and object_3. When these objects have associated weights stored in weight_1, weight_2 and weight_3, SORT can be instructed to use these weights to sort mylist with the following statement:

SORT mylist BY weight_*

The BY option takes a pattern (equal to weight_* in this example) that is used to generate the keys that are used for sorting. These key names are obtained substituting the first occurrence of * with the actual value of the element in the list (1, 2 and 3 in this example).
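A small worked session for this example (the weight values are chosen arbitrarily and mylist is assumed to start empty):

redis>  RPUSH mylist 1 2 3
(integer) 3
redis>  MSET weight_1 30 weight_2 10 weight_3 20
OK
redis>  SORT mylist BY weight_*
1) "2"
2) "3"
3) "1"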

Skip sorting the elements

The BY option can also take a non-existent key, which causes SORT to skip the sorting operation. This is useful if you want to retrieve external keys (see the GET option below) without the overhead of sorting.

SORT mylist BY nosort

Retrieving external keys

Our previous example returns just the sorted IDs. In some cases, it is more useful to get the actual objects instead of their IDs (object_1, object_2 and object_3). Retrieving external keys based on the elements in a list, set or sorted set can be done with the following command:

SORT mylist BY weight_* GET object_*

The GET option can be used multiple times in order to get more keys for every element of the original list, set or sorted set.

It is also possible to GET the element itself using the special pattern #:

SORT mylist BY weight_* GET object_* GET #

Storing the result of a SORT operation

By default, SORT returns the sorted elements to the client. With the STORE option, the result will be stored as a list at the specified key instead of being returned to the client.

SORT mylist BY weight_* STORE resultkey

An interesting pattern using SORT … STORE consists in associating an EXPIRE timeout with the resulting key, so that in applications where it is acceptable, the result of a SORT operation is cached for some time. Other clients will use the cached list instead of calling SORT for every request. When the key times out, an updated version of the cache can be created by calling SORT … STORE again.

Note that for correctly implementing this pattern it is important to avoid multiple clients rebuilding the cache at the same time. Some kind of locking is needed here (for instance using SETNX).
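A sketch of the caching idea (the cache key name and the 10 second timeout are illustrative; the SETNX-based locking mentioned above is omitted):

SORT mylist BY weight_* STORE cache.mylist
EXPIRE cache.mylist 10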

Using hashes in BY and GET

It is possible to use BY and GET options against hash fields with the following syntax:

SORT mylist BY weight_*->fieldname GET object_*->fieldname

The string -> is used to separate the key name from the hash field name. The key is substituted as documented above, and the hash stored at the resulting key is accessed to retrieve the specified hash field.
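For instance, a small sketch where each weight hash stores the sort key in a field called rank (the key names ids, w_1, w_2 and the field name rank are illustrative, and all keys are assumed to start empty):

redis>  RPUSH ids 1 2
(integer) 2
redis>  HSET w_1 rank 2
(integer) 1
redis>  HSET w_2 rank 1
(integer) 1
redis>  SORT ids BY w_*->rank
1) "2"
2) "1"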

Return value

Array reply: list of sorted elements.

>TTL key

Get the time to live for a key

Available since 1.0.0.

Time complexity: O(1)

Returns the remaining time to live of a key that has a timeout. This introspection capability allows a Redis client to check how many seconds a given key will continue to be part of the dataset.

In Redis 2.6 or older the command returns -1 if the key does not exist or if the key exist but has no associated expire.

Starting with Redis 2.8 the return value in case of error changed:

  • The command returns -2 if the key does not exist.
  • The command returns -1 if the key exists but has no associated expire.

See also the PTTL command that returns the same information with milliseconds resolution (Only available in Redis 2.6 or greater).
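A short illustration of the Redis 2.8 return values (key names are illustrative; nosuchkey is assumed not to exist and mykey2 has no expire set):

redis>  SET mykey2 "Hello"
OK
redis>  TTL mykey2
(integer) -1
redis>  TTL nosuchkey
(integer) -2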

Return value

Integer reply: TTL in seconds, or a negative value in order to signal an error (see the description above).

Examples

redis>  SET mykey "Hello"

OK

redis>  EXPIRE mykey 10

(integer) 1

redis>  TTL mykey

(integer) 10

>TYPE key

Determine the type stored at key

Available since 1.0.0.

Time complexity: O(1)

Returns the string representation of the type of the value stored at key. The different types that can be returned are: string, list, set, zset and hash.

Return value

Simple string reply: type of key, or none when key does not exist.

Examples

redis>  SET key1 "value"

OK

redis>  LPUSH key2 "value"

(integer) 1

redis>  SADD key3 "value"

(integer) 1

redis>  TYPE key1

string

redis>  TYPE key2

list

redis>  TYPE key3

set

>SCAN cursor [MATCH pattern] [COUNT count]

Incrementally iterate the keys space

Available since 2.8.0.

Time complexity: O(1) for every call. O(N) for a complete iteration, including enough command calls for the cursor to return back to 0. N is the number of elements inside the collection.

The SCAN command and the closely related commands SSCAN, HSCAN and ZSCAN are used in order to incrementally iterate over a collection of elements.

  • SCAN iterates the set of keys in the currently selected Redis database.
  • SSCAN iterates elements of Sets types.
  • HSCAN iterates fields of Hash types and their associated values.
  • ZSCAN iterates elements of Sorted Set types and their associated scores.

Since these commands allow for incremental iteration, returning only a small number of elements per call, they can be used in production without the downside of commands like KEYS or SMEMBERS that may block the server for a long time (even several seconds) when called against big collections of keys or elements.

However, while blocking commands like SMEMBERS are able to provide all the elements that are part of a Set in a given moment, the SCAN family of commands only offers limited guarantees about the returned elements, since the collection that we incrementally iterate can change during the iteration process.

Note that SCAN, SSCAN, HSCAN and ZSCAN all work very similarly, so this documentation covers all the four commands. However an obvious difference is that in the case of SSCAN, HSCAN and ZSCAN the first argument is the name of the key holding the Set, Hash or Sorted Set value. The SCAN command does not need any key name argument as it iterates keys in the current database, so the iterated object is the database itself.

SCAN basic usage

SCAN is a cursor based iterator. This means that at every call of the command, the server returns an updated cursor that the user needs to use as the cursor argument in the next call.

An iteration starts when the cursor is set to 0, and terminates when the cursor returned by the server is 0. The following is an example of SCAN iteration:

redis 127.0.0.1:6379> scan 0
1) "17"
2)  1) "key:12"
    2) "key:8"
    3) "key:4"
    4) "key:14"
    5) "key:16"
    6) "key:17"
    7) "key:15"
    8) "key:10"
    9) "key:3"
   10) "key:7"
   11) "key:1"
redis 127.0.0.1:6379> scan 17
1) "0"
2) 1) "key:5"
   2) "key:18"
   3) "key:0"
   4) "key:2"
   5) "key:19"
   6) "key:13"
   7) "key:6"
   8) "key:9"
   9) "key:11"

In the example above, the first call uses zero as a cursor, to start the iteration. The second call uses the cursor returned as the first element of the reply of the first call, that is, 17.

As you can see the SCAN return value is an array of two values: the first value is the new cursor to use in the next call, the second value is an array of elements.

Since in the second call the returned cursor is 0, the server signaled to the caller that the iteration finished, and the collection was completely explored. Starting an iteration with a cursor value of 0, and calling SCAN until the returned cursor is 0 again is called a full iteration.
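A full iteration can therefore be driven by a simple client-side loop, sketched here in the same pseudocode style used elsewhere in this document:

cursor = 0
LOOP forever
    cursor, elements = SCAN cursor
    ... process elements ...
    IF cursor == 0 BREAK
END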

Scan guarantees

The SCAN command, and the other commands in the SCAN family, are able to provide to the user a set of guarantees associated to full iterations.

  • A full iteration always retrieves all the elements that were present in the collection from the start to the end of a full iteration. This means that if a given element is inside the collection when an iteration is started, and is still there when an iteration terminates, then at some point SCAN returned it to the user.
  • A full iteration never returns any element that was NOT present in the collection from the start to the end of a full iteration. So if an element was removed before the start of an iteration, and is never added back to the collection for all the time an iteration lasts, SCAN ensures that this element will never be returned.

However because SCAN has very little state associated (just the cursor) it has the following drawbacks:

  • A given element may be returned multiple times. It is up to the application to handle the case of duplicated elements, for example only using the returned elements in order to perform operations that are safe when re-applied multiple times.
  • Elements that were not constantly present in the collection during a full iteration, may be returned or not: it is undefined.

Number of elements returned at every SCAN call

SCAN family functions do not guarantee that the number of elements returned per call is in a given range. The commands are also allowed to return zero elements, and the client should not consider the iteration complete as long as the returned cursor is not zero.

However the number of returned elements is reasonable, that is, in practical terms SCAN may return a maximum number of elements in the order of a few tens of elements when iterating a large collection, or may return all the elements of the collection in a single call when the iterated collection is small enough to be internally represented as an encoded data structure (this happens for small sets, hashes and sorted sets).

However there is a way for the user to tune the order of magnitude of the number of returned elements per call using the COUNT option.

The COUNT option

While SCAN does not provide guarantees about the number of elements returned at every iteration, it is possible to empirically adjust the behavior of SCAN using the COUNT option. Basically with COUNT the user specifies the amount of work that should be done at every call in order to retrieve elements from the collection. This is just a hint for the implementation; however, generally speaking, this is what you can expect most of the time from the implementation.

  • The default COUNT value is 10.
  • When iterating the key space, or a Set, Hash or Sorted Set that is big enough to be represented by a hash table, assuming no MATCH option is used, the server will usually return count or a bit more than count elements per call.
  • When iterating Sets encoded as intsets (small sets composed of just integers), or Hashes and Sorted Sets encoded as ziplists (small hashes and sorted sets composed of small individual values), usually all the elements are returned in the first SCAN call regardless of the COUNT value.

Important: there is no need to use the same COUNT value for every iteration. The caller is free to change the count from one iteration to the other as required, as long as the cursor passed in the next call is the one obtained in the previous call to the command.

The MATCH option

It is possible to only iterate elements matching a given glob-style pattern, similarly to the behavior of the KEYS command that takes a pattern as only argument.

To do so, just append the MATCH <pattern> arguments at the end of the SCAN command (it works with all the SCAN family commands).

This is an example of iteration using MATCH:

redis 127.0.0.1:6379> sadd myset 1 2 3 foo foobar feelsgood
(integer) 6
redis 127.0.0.1:6379> sscan myset 0 match f*
1) "0"
2) 1) "foo"
   2) "feelsgood"
   3) "foobar"
redis 127.0.0.1:6379>

It is important to note that the MATCH filter is applied after elements are retrieved from the collection, just before returning data to the client. This means that if the pattern matches very few elements inside the collection, SCAN will likely return no elements in most iterations. An example is shown below:

redis 127.0.0.1:6379> scan 0 MATCH *11*
1) "288"
2) 1) "key:911"
redis 127.0.0.1:6379> scan 288 MATCH *11*
1) "224"
2) (empty list or set)
redis 127.0.0.1:6379> scan 224 MATCH *11*
1) "80"
2) (empty list or set)
redis 127.0.0.1:6379> scan 80 MATCH *11*
1) "176"
2) (empty list or set)
redis 127.0.0.1:6379> scan 176 MATCH *11* COUNT 1000
1) "0"
2)  1) "key:611"
    2) "key:711"
    3) "key:118"
    4) "key:117"
    5) "key:311"
    6) "key:112"
    7) "key:111"
    8) "key:110"
    9) "key:113"
   10) "key:211"
   11) "key:411"
   12) "key:115"
   13) "key:116"
   14) "key:114"
   15) "key:119"
   16) "key:811"
   17) "key:511"
   18) "key:11"
redis 127.0.0.1:6379>

As you can see most of the calls returned zero elements, except the last call, where a COUNT of 1000 was used in order to force the command to do more scanning for that iteration.
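Because the MATCH filter is applied after elements are fetched, a client must keep calling SCAN until the returned cursor is 0 and treat empty batches as normal. A minimal sketch of such a loop, assuming the redis-py client and reusing the *11* pattern from the example above:

import redis

r = redis.Redis(decode_responses=True)

cursor, matched = 0, []
while True:
    cursor, batch = r.scan(cursor=cursor, match="*11*", count=1000)
    matched.extend(batch)     # batch may well be empty on many iterations
    if cursor == 0:
        break
print(matched)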

Multiple parallel iterations

It is possible for an infinite number of clients to iterate the same collection at the same time, as the full state of the iterator is in the cursor, which is obtained and returned to the client at every call. No state is kept on the server side at all.

Terminating iterations in the middle

Since there is no state on the server side and the full state is captured by the cursor, the caller is free to terminate an iteration half-way without signaling this to the server in any way. An infinite number of iterations can be started and never terminated without any issue.

Calling SCAN with a corrupted cursor

Calling SCAN with a broken, negative, out of range, or otherwise invalid cursor will result in undefined behavior but never in a crash. What is undefined is that the guarantees about the returned elements can no longer be ensured by the SCAN implementation.

The only valid cursors to use are:

  • The cursor value of 0 when starting an iteration.
  • The cursor returned by the previous call to SCAN in order to continue the iteration.

Guarantee of termination

The SCAN algorithm is guaranteed to terminate only if the size of the iterated collection remains bounded to a given maximum size; otherwise, iterating a collection that always grows may result in SCAN never terminating a full iteration.

This is easy to see intuitively: if the collection grows there is more and more work to do in order to visit all the possible elements, and the ability to terminate the iteration depends on the number of calls to SCAN and its COUNT option value compared with the rate at which the collection grows.

Return value

SCAN, SSCAN, HSCAN and ZSCAN return a two-element multi-bulk reply, where the first element is a string representing an unsigned 64 bit number (the cursor), and the second element is a multi-bulk with an array of elements.

  • SCAN array of elements is a list of keys.
  • SSCAN array of elements is a list of Set members.
  • HSCAN array of elements contains two elements, a field and a value, for every returned element of the Hash.
  • ZSCAN array of elements contains two elements, a member and its associated score, for every returned element of the sorted set.

Additional examples

Iteration of a Hash value.

redis 127.0.0.1:6379> hmset hash name Jack age 33
OK
redis 127.0.0.1:6379> hscan hash 0
1) "0"
2) 1) "name"
   2) "Jack"
   3) "age"
   4) "33"

transactions

>DISCARD

Discard all commands issued after MULTI

Available since 2.0.0.

Flushes all previously queued commands in a transaction and restores the connection state to normal.

If WATCH was used, DISCARD unwatches all keys.

Return value

Simple string reply: always OK.

>EXEC

Execute all commands issued after MULTI

Available since 1.2.0.

Executes all previously queued commands in a transaction and restores the connection state to normal.

When using WATCH, EXEC will execute commands only if the watched keys were not modified, allowing for a check-and-set mechanism.

Return value

Array reply: each element being the reply to each of the commands in the atomic transaction.

When using WATCH, EXEC can return a Null reply if the execution was aborted.

>MULTI

Mark the start of a transaction block

Available since 1.2.0.

Marks the start of a transaction block. Subsequent commands will be queued for atomic execution using EXEC.

Return value

Simple string reply: always OK.

>UNWATCH

Forget about all watched keys

Available since 2.2.0.

Time complexity: O(1)

Flushes all the previously watched keys for a transaction.

If you call EXEC or DISCARD, there’s no need to manually call UNWATCH.

Return value

Simple string reply: always OK.

>WATCH key [key …]

Watch the given keys to determine execution of the MULTI/EXEC block

Available since 2.2.0.

Time complexity: O(1) for every key.

Marks the given keys to be watched for conditional execution of a transaction.

Return value

Simple string reply: always OK.
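The check-and-set mechanism mentioned under EXEC and WATCH usually looks like the sketch below: WATCH the key, read it, queue the write inside MULTI, and retry if EXEC aborts because another client touched the key. It assumes the redis-py client; the key name and the increment logic are illustrative.

import redis
from redis.exceptions import WatchError

r = redis.Redis()

def increment_with_cas(key):
    with r.pipeline() as pipe:
        while True:
            try:
                pipe.watch(key)                  # WATCH key
                current = int(pipe.get(key) or 0)
                pipe.multi()                     # start queueing commands
                pipe.set(key, current + 1)
                pipe.execute()                   # EXEC; raises WatchError if key changed
                return current + 1
            except WatchError:
                continue                         # another client modified the key: retry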

scripting

>EVAL script numkeys key [key …] arg [arg …]

Execute a Lua script server side

Available since 2.6.0.

Time complexity: Depends on the script that is executed.

Introduction to EVAL

EVAL and EVALSHA are used to evaluate scripts using the Lua interpreter built into Redis starting from version 2.6.0.

The first argument of EVAL is a Lua 5.1 script. The script does not need to define a Lua function (and should not). It is just a Lua program that will run in the context of the Redis server.

The second argument of EVAL is the number of arguments that follow the script (starting from the third argument) that represent Redis key names. These arguments can be accessed by Lua using the KEYS global variable in the form of a one-based array (so KEYS[1], KEYS[2], …).

All the additional arguments should not represent key names and can be accessed by Lua using the ARGV global variable, very similarly to what happens with keys (so ARGV[1], ARGV[2], …).

The following example should clarify what is stated above:

> eval "return {KEYS[1],KEYS[2],ARGV[1],ARGV[2]}" 2 key1 key2 first second
1) "key1"
2) "key2"
3) "first"
4) "second"

Note: as you can see Lua arrays are returned as Redis multi bulk replies, which is a Redis return type that your client library will likely convert into an Array type in your programming language.

It is possible to call Redis commands from a Lua script using two different Lua functions:

  • redis.call()
  • redis.pcall()

redis.call() is similar to redis.pcall(); the only difference is that if a Redis command call results in an error, redis.call() will raise a Lua error that in turn will force EVAL to return an error to the command caller, while redis.pcall() will trap the error, returning a Lua table representing the error.

The arguments of the redis.call() and redis.pcall() functions are simply all the arguments of a well formed Redis command:

> eval "return redis.call('set','foo','bar')" 0
OK

The above script actually sets the key foo to the string bar. However it violates the EVAL command semantics as all the keys that the script uses should be passed using the KEYS array, in the following way:

> eval "return redis.call('set',KEYS[1],'bar')" 1 foo
OK

The reason for passing keys in the proper way is that, before EVAL, all the Redis commands could be analyzed before execution in order to establish which keys the command will operate on.

In order for this to be true for EVAL also keys must be explicit. This is useful in many ways, but especially in order to make sure Redis Cluster is able to forward your request to the appropriate cluster node (Redis Cluster is a work in progress, but the scripting feature was designed in order to play well with it). However this rule is not enforced in order to provide the user with opportunities to abuse the Redis single instance configuration, at the cost of writing scripts not compatible with Redis Cluster.

Lua scripts can return a value, that is converted from the Lua type to the Redis protocol using a set of conversion rules.

Conversion between Lua and Redis data types

Redis return values are converted into Lua data types when Lua calls a Redis command using call() or pcall(). Similarly Lua data types are converted into the Redis protocol when a Lua script returns a value, so that scripts can control what EVAL will return to the client.

This conversion between data types is designed in a way that if a Redis type is converted into a Lua type, and then the result is converted back into a Redis type, the result is the same as the initial value.

In other words there is a one-to-one conversion between Lua and Redis types. The following table shows you all the conversion rules:

Redis to Lua conversion table.

  • Redis integer reply -> Lua number
  • Redis bulk reply -> Lua string
  • Redis multi bulk reply -> Lua table (may have other Redis data types nested)
  • Redis status reply -> Lua table with a single ok field containing the status
  • Redis error reply -> Lua table with a single err field containing the error
  • Redis Nil bulk reply and Nil multi bulk reply -> Lua false boolean type

Lua to Redis conversion table.

  • Lua number -> Redis integer reply (the number is converted into an integer)
  • Lua string -> Redis bulk reply
  • Lua table (array) -> Redis multi bulk reply (truncated to the first nil inside the Lua array if any)
  • Lua table with a single ok field -> Redis status reply
  • Lua table with a single err field -> Redis error reply
  • Lua boolean false -> Redis Nil bulk reply.

There is an additional Lua-to-Redis conversion rule that has no corresponding Redis to Lua conversion rule:

  • Lua boolean true -> Redis integer reply with value of 1.

Also there are two important rules to note:

  • Lua has a single numerical type, Lua numbers. There is no distinction between integers and floats. So we always convert Lua numbers into integer replies, removing the decimal part of the number if any. If you want to return a float from Lua you should return it as a string, exactly like Redis itself does (see for instance the ZSCORE command).
  • There is no simple way to have nils inside Lua arrays, this is a result of Lua table semantics, so when Redis converts a Lua array into Redis protocol the conversion is stopped if a nil is encountered.

Here are a few conversion examples:

> eval "return 10" 0
(integer) 10

> eval "return {1,2,{3,'Hello World!'}}" 0
1) (integer) 1
2) (integer) 2
3) 1) (integer) 3
   2) "Hello World!"

> eval "return redis.call('get','foo')" 0
"bar"

The last example shows how it is possible to receive the exact return value of redis.call() or redis.pcall() from Lua that would be returned if the command was called directly.

In the following example we can see how floats and arrays with nils are handled:

> eval "return {1,2,3.3333,'foo',nil,'bar'}" 0
1) (integer) 1
2) (integer) 2
3) (integer) 3
4) "foo"

As you can see, 3.3333 is converted into 3, and the bar string is never returned as there is a nil before it.

Helper functions to return Redis types

There are two helper functions to return Redis types from Lua.

  • redis.error_reply(error_string) returns an error reply. This function simply returns the single field table with the err field set to the specified string for you.
  • redis.status_reply(status_string) returns a status reply. This function simply returns the single field table with the ok field set to the specified string for you.

There is no difference between using the helper functions or directly returning the table with the specified format, so the following two forms are equivalent:

return {err="My Error"}
return redis.error_reply("My Error")

Atomicity of scripts

Redis uses the same Lua interpreter to run all the commands. Also Redis guarantees that a script is executed in an atomic way: no other script or Redis command will be executed while a script is being executed. These semantics are very similar to those of MULTI / EXEC. From the point of view of all the other clients the effects of a script are either still not visible or already completed.

However this also means that executing slow scripts is not a good idea. It is not hard to create fast scripts, as the script overhead is very low, but if you are going to use slow scripts you should be aware that while the script is running no other client can execute commands since the server is busy.

Error handling

As already stated, calls to redis.call() resulting in a Redis command error will stop the execution of the script and will return the error, in a way that makes it obvious that the error was generated by a script:

> del foo
(integer) 1
> lpush foo a
(integer) 1
> eval "return redis.call('get','foo')" 0
(error) ERR Error running script (call to f_6b1bf486c81ceb7edf3c093f4c48582e38c0e791): ERR Operation against a key holding the wrong kind of value

Using the redis.pcall() command no error is raised, but an error object is returned in the format specified above (as a Lua table with an err field). The script can pass the exact error to the user by returning the error object returned by redis.pcall().
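A minimal sketch of that difference, assuming the redis-py client: the same wrong-type GET as above, but trapped with redis.pcall() so the script can inspect the error table and decide what to return. The wrapped error message is just an illustration.

import redis
from redis.exceptions import ResponseError

r = redis.Redis()
r.delete("foo")
r.lpush("foo", "a")           # make foo a list so GET will fail

script = """
local ok = redis.pcall('get', KEYS[1])
if type(ok) == 'table' and ok.err then
    -- pass the original error through, or substitute our own
    return redis.error_reply('wrapped: ' .. ok.err)
end
return ok
"""
try:
    r.eval(script, 1, "foo")
except ResponseError as e:
    print("script returned an error:", e)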

Bandwidth and EVALSHA

The EVAL command forces you to send the script body again and again. Redis does not need to recompile the script every time as it uses an internal caching mechanism, however paying the cost of the additional bandwidth may not be optimal in many contexts.

On the other hand, defining commands using a special command or via redis.conf would be a problem for a few reasons:

  • Different instances may have different versions of a command implementation.

  • Deployment is hard if we need to make sure all the instances contain a given command, especially in a distributed environment.

  • Reading application code, the full semantics may not be clear since the application would call commands defined server side.

In order to avoid these problems while avoiding the bandwidth penalty, Redis implements the EVALSHA command.

EVALSHA works exactly like EVAL, but instead of having a script as the first argument it has the SHA1 digest of a script. The behavior is the following:

  • If the server still remembers a script with a matching SHA1 digest, the script is executed.

  • If the server does not remember a script with this SHA1 digest, a special error is returned telling the client to use EVAL instead.

Example:

> set foo bar
OK
> eval "return redis.call('get','foo')" 0
"bar"
> evalsha 6b1bf486c81ceb7edf3c093f4c48582e38c0e791 0
"bar"
> evalsha ffffffffffffffffffffffffffffffffffffffff 0
(error) NOSCRIPT No matching script. Please use EVAL.

The client library implementation can always optimistically send EVALSHA under the hood even when the client actually calls EVAL, in the hope the script was already seen by the server. If the NOSCRIPT error is returned EVAL will be used instead.

Passing keys and arguments as additional EVAL arguments is also very useful in this context as the script string remains constant and can be efficiently cached by Redis.
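A sketch of this optimistic pattern, assuming the redis-py client (which surfaces the NOSCRIPT failure as NoScriptError): send EVALSHA first and fall back to EVAL only when the server does not know the script.

import hashlib
import redis
from redis.exceptions import NoScriptError

r = redis.Redis()

script = "return redis.call('get', KEYS[1])"
sha1 = hashlib.sha1(script.encode()).hexdigest()

def cached_get(key):
    try:
        return r.evalsha(sha1, 1, key)   # cheap: only the digest travels
    except NoScriptError:
        return r.eval(script, 1, key)    # first time only: sends the script body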

Script cache semantics

Executed scripts are guaranteed to be in the script cache of a given execution of a Redis instance forever. This means that if an EVAL is performed against a Redis instance all the subsequent EVALSHA calls will succeed.

The reason why scripts can be cached for a long time is that it is unlikely for a well written application to have enough different scripts to cause memory problems. Every script is conceptually like the implementation of a new command, and even a large application will likely have just a few hundred of them. Even if the application is modified many times and the scripts change, the memory used is negligible.

The only way to flush the script cache is by explicitly calling the SCRIPT FLUSH command, which will completely flush the scripts cache removing all the scripts executed so far.

This is usually needed only when the instance is going to be instantiated for another customer or application in a cloud environment.

Also, as already mentioned, restarting a Redis instance flushes the script cache, which is not persistent. However from the point of view of the client there are only two ways to make sure a Redis instance was not restarted between two different commands.

  • The connection we have with the server is persistent and was never closed so far.
  • The client explicitly checks the runid field in the INFO command in order to make sure the server was not restarted and is still the same process.

Practically speaking, for the client it is much better to simply assume that in the context of a given connection, cached scripts are guaranteed to be there unless an administrator explicitly called the SCRIPT FLUSH command.

The fact that the user can count on Redis not removing scripts is semantically useful in the context of pipelining.

For instance an application with a persistent connection to Redis can be sure that if a script was sent once it is still in memory, so EVALSHA can be used against those scripts in a pipeline without the chance of an error being generated due to an unknown script (we’ll see this problem in detail later).

A common pattern is to call SCRIPT LOAD to load all the scripts that will appear in a pipeline, then use EVALSHA directly inside the pipeline without any need to check for errors resulting from the script hash not being recognized.
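A sketch of that pattern, assuming the redis-py client; the script and key names are illustrative: load the script once with SCRIPT LOAD, then use only EVALSHA inside the pipeline.

import redis

r = redis.Redis(decode_responses=True)

script = "return redis.call('incrby', KEYS[1], ARGV[1])"
sha1 = r.script_load(script)             # returns the SHA1 digest

pipe = r.pipeline(transaction=False)     # plain pipeline, no MULTI/EXEC
for key in ("counter:a", "counter:b", "counter:c"):
    pipe.evalsha(sha1, 1, key, 10)
print(pipe.execute())                    # e.g. [10, 10, 10] on fresh keys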

The SCRIPT command

Redis offers a SCRIPT command that can be used in order to control the scripting subsystem. SCRIPT currently accepts three different commands:

  • SCRIPT FLUSH. This command is the only way to force Redis to flush the scripts cache. It is most useful in a cloud environment where the same instance can be reassigned to a different user. It is also useful for testing client libraries’ implementations of the scripting feature.

  • SCRIPT EXISTS sha1 sha2 … shaN. Given a list of SHA1 digests as arguments this command returns an array of 1 or 0, where 1 means the specific SHA1 is recognized as a script already present in the scripting cache, while 0 means that a script with this SHA1 was never seen before (or at least never seen after the latest SCRIPT FLUSH command).

  • SCRIPT LOAD script. This command registers the specified script in the Redis script cache. The command is useful in all the contexts where we want to make sure that EVALSHA will not fail (for instance during a pipeline or MULTI/EXEC operation), without the need to actually execute the script.

  • SCRIPT KILL. This command is the only way to interrupt a long-running script that reaches the configured maximum execution time for scripts. The SCRIPT KILL command can only be used with scripts that did not modify the dataset during their execution (since stopping a read-only script does not violate the scripting engine’s guaranteed atomicity). See the next sections for more information about long running scripts.

Scripts as pure functions

A very important part of scripting is writing scripts that are pure functions. Scripts executed in a Redis instance are replicated on slaves by sending the script – not the resulting commands. The same happens for the Append Only File. The reason is that sending a script to another Redis instance is much faster than sending the multiple commands the script generates, so if the client is sending many scripts to the master, converting the scripts into individual commands for the slave / AOF would result in too much bandwidth for the replication link or the Append Only File (and also too much CPU since dispatching a command received via network is a lot more work for Redis compared to dispatching a command invoked by Lua scripts).

The only drawback with this approach is that scripts are required to have the following property:

  • The script always evaluates the same Redis write commands with the same arguments given the same input data set. Operations performed by the script cannot depend on any hidden (non-explicit) information or state that may change as script execution proceeds or between different executions of the script, nor can it depend on any external input from I/O devices.

Things like using the system time, calling Redis random commands like RANDOMKEY, or using the Lua random number generator, could result in scripts that will not always evaluate in the same way.

In order to enforce this behavior in scripts Redis does the following:

  • Lua does not export commands to access the system time or other external state.

  • Redis will block the script with an error if a script calls a Redis command able to alter the data set after a Redis random command like RANDOMKEY, SRANDMEMBER, TIME. This means that if a script is read-only and does not modify the data set it is free to call those commands. Note that a random command does not necessarily mean a command that uses random numbers: any non-deterministic command is considered a random command (the best example in this regard is the TIME command).

  • Redis commands that may return elements in random order, like SMEMBERS (because Redis Sets are unordered) have a different behavior when called from Lua, and undergo a silent lexicographical sorting filter before returning data to Lua scripts. So redis.call(“smembers”,KEYS[1]) will always return the Set elements in the same order, while the same command invoked from normal clients may return different results even if the key contains exactly the same elements.

  • Lua pseudo random number generation functions math.random and math.randomseed are modified in order to always have the same seed every time a new script is executed. This means that calling math.random will always generate the same sequence of numbers every time a script is executed if math.randomseed is not used.

However the user is still able to write commands with random behavior using the following simple trick. Imagine I want to write a Redis script that will populate a list with N random integers.

I can start with this small Ruby program:

require 'rubygems'
require 'redis'

r = Redis.new

RandomPushScript = <<EOF
    local i = tonumber(ARGV[1])
    local res
    while (i > 0) do
        res = redis.call('lpush',KEYS[1],math.random())
        i = i-1
    end
    return res
EOF

r.del(:mylist)
puts r.eval(RandomPushScript,[:mylist],[10,rand(2**32)])

Every time this script is executed the resulting list will have exactly the following elements:

> lrange mylist 0 -1
 1) "0.74509509873814"
 2) "0.87390407681181"
 3) "0.36876626981831"
 4) "0.6921941534114"
 5) "0.7857992587545"
 6) "0.57730350670279"
 7) "0.87046522734243"
 8) "0.09637165539729"
 9) "0.74990198051087"
10) "0.17082803611217"

In order to make it a pure function, but still be sure that every invocation of the script will result in different random elements, we can simply add an additional argument to the script that will be used in order to seed the Lua pseudo-random number generator. The new script is as follows:

RandomPushScript = <<EOF
    local i = tonumber(ARGV[1])
    local res
    math.randomseed(tonumber(ARGV[2]))
    while (i > 0) do
        res = redis.call('lpush',KEYS[1],math.random())
        i = i-1
    end
    return res
EOF

r.del(:mylist)
puts r.eval(RandomPushScript,1,:mylist,10,rand(2**32))

What we are doing here is sending the seed of the PRNG as one of the arguments. This way the script output will be the same given the same arguments, but we are changing one of the arguments in every invocation, generating the random seed client-side. The seed will be propagated as one of the arguments both in the replication link and in the Append Only File, guaranteeing that the same changes will be generated when the AOF is reloaded or when the slave processes the script.

Note: an important part of this behavior is that the PRNG that Redis implements as math.random and math.randomseed is guaranteed to have the same output regardless of the architecture of the system running Redis. 32-bit, 64-bit, big-endian and little-endian systems will all produce the same output.

Global variables protection

Redis scripts are not allowed to create global variables, in order to avoid leaking data into the Lua state. If a script needs to maintain state between calls (a pretty uncommon need) it should use Redis keys instead.

When global variable access is attempted the script is terminated and EVAL returns with an error:

redis 127.0.0.1:6379> eval 'a=10' 0
(error) ERR Error running script (call to f_933044db579a2f8fd45d8065f04a8d0249383e57): user_script:1: Script attempted to create global variable 'a'

Accessing a non existing global variable generates a similar error.

It is not hard to circumvent the globals protection using Lua debugging functionality or other approaches, such as altering the meta table used to implement it. However it is difficult to do so accidentally. If the user messes with the Lua global state, the consistency of AOF and replication is not guaranteed: don't do it.

Note for Lua newbies: in order to avoid using global variables in your scripts simply declare every variable you are going to use using the local keyword.

Using SELECT inside scripts

It is possible to call SELECT inside Lua scripts like with normal clients. However one subtle aspect of the behavior changed between Redis 2.8.11 and Redis 2.8.12. Before the 2.8.12 release the database selected by the Lua script was transferred to the calling client as the current database. Starting from Redis 2.8.12 the database selected by the Lua script only affects the execution of the script itself, but does not modify the database selected by the client calling the script.

The semantical change between patch level releases was needed since the old behavior was inherently incompatible with the Redis replication layer and was the cause of bugs.

Available libraries

The Redis Lua interpreter loads the following Lua libraries:

  • base lib.
  • table lib.
  • string lib.
  • math lib.
  • debug lib.
  • struct lib.
  • cjson lib.
  • cmsgpack lib.
  • redis.sha1hex function.

Every Redis instance is guaranteed to have all the above libraries so you can be sure that the environment for your Redis scripts is always the same.

struct, CJSON and cmsgpack are external libraries; all the other libraries are standard Lua libraries.

struct

struct is a library for packing/unpacking structures within Lua.

Valid formats:
> - big endian
< - little endian
![num] - alignment
x - padding
b/B - signed/unsigned byte
h/H - signed/unsigned short
l/L - signed/unsigned long
T   - size_t
i/In - signed/unsigned integer with size `n' (default is size of int)
cn - sequence of `n' chars (from/to a string); when packing, n==0 means
     the whole string; when unpacking, n==0 means use the previous
     read number as the string length
s - zero-terminated string
f - float
d - double
' ' - ignored

Example:

127.0.0.1:6379> eval 'return struct.pack("HH", 1, 2)' 0
"\x01\x00\x02\x00"
127.0.0.1:6379> eval 'return {struct.unpack("HH", ARGV[1])}' 0 "\x01\x00\x02\x00"
1) (integer) 1
2) (integer) 2
3) (integer) 5
127.0.0.1:6379> eval 'return struct.size("HH")' 0
(integer) 4

CJSON

The CJSON library provides extremely fast JSON manipulation within Lua.

Example:

redis 127.0.0.1:6379> eval 'return cjson.encode({["foo"]= "bar"})' 0
"{\"foo\":\"bar\"}"
redis 127.0.0.1:6379> eval 'return cjson.decode(ARGV[1])["foo"]' 0 "{\"foo\":\"bar\"}"
"bar"

cmsgpack

The cmsgpack library provides simple and fast MessagePack manipulation within Lua.

Example:

127.0.0.1:6379> eval 'return cmsgpack.pack({"foo", "bar", "baz"})' 0
"\x93\xa3foo\xa3bar\xa3baz"
127.0.0.1:6379> eval 'return cmsgpack.unpack(ARGV[1])' 0 "\x93\xa3foo\xa3bar\xa3baz"
1) "foo"
2) "bar"
3) "baz"

redis.sha1hex

Perform the SHA1 of the input string.

Example:

127.0.0.1:6379> eval 'return redis.sha1hex(ARGV[1])' 0 "foo"
"0beec7b5ea3f0fdbc95d0dd47f3c5bc275da8a33"

Emitting Redis logs from scripts

It is possible to write to the Redis log file from Lua scripts using the redis.log function.

redis.log(loglevel,message)

loglevel is one of:

  • redis.LOG_DEBUG
  • redis.LOG_VERBOSE
  • redis.LOG_NOTICE
  • redis.LOG_WARNING

They correspond directly to the normal Redis log levels. Only logs emitted by scripts using a log level that is equal to or greater than the currently configured Redis instance log level will be emitted.

The message argument is simply a string. Example:

redis.log(redis.LOG_WARNING,"Something is wrong with this script.")

Will generate the following:

[32343] 22 Mar 15:21:39 # Something is wrong with this script.

Sandbox and maximum execution time

Scripts should never try to access the external system, like the file system, or perform any other system call. A script should only operate on Redis data and passed arguments.

Scripts are also subject to a maximum execution time (five seconds by default). This default timeout is huge since a script should usually run in under a millisecond. The limit is mostly to handle accidental infinite loops created during development.

It is possible to modify the maximum time a script can be executed with millisecond precision, either via redis.conf or using the CONFIG GET / CONFIG SET command. The configuration parameter affecting max execution time is called lua-time-limit.
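For instance, the limit can be inspected and adjusted at runtime; a minimal sketch assuming the redis-py client (the value is in milliseconds):

import redis

r = redis.Redis(decode_responses=True)
print(r.config_get("lua-time-limit"))    # e.g. {'lua-time-limit': '5000'}
r.config_set("lua-time-limit", 10000)    # allow scripts to run up to 10 seconds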

When a script reaches the timeout it is not automatically terminated by Redis since this violates the contract Redis has with the scripting engine to ensure that scripts are atomic. Interrupting a script means potentially leaving the dataset with half-written data. For this reason, when a script executes for more than the specified time, the following happens:

  • Redis logs that a script is running too long.
  • It starts accepting commands again from other clients, but will reply with a BUSY error to all the clients sending normal commands. The only allowed commands in this status are SCRIPT KILL and SHUTDOWN NOSAVE.
  • It is possible to terminate a script that executes only read-only commands using the SCRIPT KILL command. This does not violate the scripting semantic as no data was yet written to the dataset by the script.
  • If the script already called write commands the only allowed command becomes SHUTDOWN NOSAVE that stops the server without saving the current data set on disk (basically the server is aborted).

EVALSHA in the context of pipelining

Care should be taken when executing EVALSHA in the context of a pipelined request, since even in a pipeline the order of execution of commands must be guaranteed. If EVALSHA returns a NOSCRIPT error the command can not simply be reissued later, otherwise the order of execution would be violated.

The client library implementation should take one of the following approaches:

  • Always use plain EVAL when in the context of a pipeline.

  • Accumulate all the commands to send into the pipeline, then check for EVAL commands and use the SCRIPT EXISTS command to check if all the scripts are already defined. If not, add SCRIPT LOAD commands on top of the pipeline as required, and use EVALSHA for all the EVAL calls.

>EVALSHA sha1 numkeys key [key …] arg [arg …]

Execute a Lua script server side

Available since 2.6.0.

Time complexity: Depends on the script that is executed.

Evaluates a script cached on the server side by its SHA1 digest. Scripts are cached on the server side using the SCRIPT LOAD command. The command is otherwise identical to EVAL.

>SCRIPT EXISTS script [script …]

Check existence of scripts in the script cache.

Available since 2.6.0.

Time complexity: O(N) with N being the number of scripts to check (so checking a single script is an O(1) operation).

Returns information about the existence of the scripts in the script cache.

This command accepts one or more SHA1 digests and returns a list of ones or zeros to signal if the scripts are already defined or not inside the script cache. This can be useful before a pipelining operation to ensure that scripts are loaded (and if not, to load them using SCRIPT LOAD) so that the pipelining operation can be performed solely using EVALSHA instead of EVAL to save bandwidth.

Please refer to the EVAL documentation for detailed information about Redis Lua scripting.

Return value

Array reply The command returns an array of integers that correspond to the specified SHA1 digest arguments. For every corresponding SHA1 digest of a script that actually exists in the script cache, a 1 is returned, otherwise 0 is returned.

redis>  SCRIPT LOAD “return 1”

"e0e1f9fabfc9d4800c877a703b823ac0578ff8db"

redis>  SCRIPT EXISTS e0e1f9fabfc9d4800c877a703b823ac0578ff8db

1) (integer) 1

>SCRIPT FLUSH

Remove all the scripts from the script cache.

Available since 2.6.0.

Time complexity: O(N) with N being the number of scripts in cache

Flush the Lua scripts cache.

Please refer to the EVAL documentation for detailed information about Redis Lua scripting.

Return value

Simple string reply

>SCRIPT KILL

Kill the script currently in execution.

Available since 2.6.0.

Time complexity: O(1)

Kills the currently executing Lua script, assuming no write operation was yet performed by the script.

This command is mainly useful to kill a script that is running for too long (for instance because it entered an infinite loop because of a bug). The script will be killed and the client currently blocked in EVAL will see the command returning with an error.

If the script already performed write operations it can not be killed in this way because it would violate the Lua script atomicity contract. In such a case only SHUTDOWN NOSAVE is able to kill the script, killing the Redis process in a hard way and preventing it from persisting half-written information.

Please refer to the EVAL documentation for detailed information about Redis Lua scripting.

Return value

Simple string reply

>SCRIPT LOAD script

Load the specified Lua script into the script cache.

Available since 2.6.0.

Time complexity: O(N) with N being the length in bytes of the script body.

Load a script into the scripts cache, without executing it. After the specified script is loaded into the script cache it will be callable using EVALSHA with the correct SHA1 digest of the script, exactly like after the first successful invocation of EVAL.

The script is guaranteed to stay in the script cache forever (unless SCRIPT FLUSH is called).

The command works in the same way even if the script was already present in the script cache.

Please refer to the EVAL documentation for detailed information about Redis Lua scripting.

Return value

Bulk string reply This command returns the SHA1 digest of the script added into the script cache.

hash

>HDEL key field [field …]

Delete one or more hash fields

Available since 2.0.0.

Time complexity: O(N) where N is the number of fields to be removed.

Removes the specified fields from the hash stored at key. Specified fields that do not exist within this hash are ignored. If key does not exist, it is treated as an empty hash and this command returns 0.

Return value

Integer reply: the number of fields that were removed from the hash, not including specified but non existing fields.

History

  • >= 2.4: Accepts multiple field arguments. Redis versions older than 2.4 can only remove a field per call.

    To remove multiple fields from a hash in an atomic fashion in earlier versions, use a MULTI / EXEC block.

Examples

redis>  HSET myhash field1 “foo”

(integer) 1

redis>  HDEL myhash field1

(integer) 1

redis>  HDEL myhash field2

(integer) 0

>HEXISTS key field

Determine if a hash field exists

Available since 2.0.0.

Time complexity: O(1)

Returns if field is an existing field in the hash stored at key.

Return value

Integer reply, specifically:

  • 1 if the hash contains field.
  • 0 if the hash does not contain field, or key does not exist.

Examples

redis>  HSET myhash field1 “foo”

(integer) 1

redis>  HEXISTS myhash field1

(integer) 1

redis>  HEXISTS myhash field2

(integer) 0

>HGET key field

Get the value of a hash field

Available since 2.0.0.

Time complexity: O(1)

Returns the value associated with field in the hash stored at key.

Return value

Bulk string reply: the value associated with field, or nil when field is not present in the hash or key does not exist.

Examples

redis>  HSET myhash field1 “foo”

(integer) 1

redis>  HGET myhash field1

"foo"

redis>  HGET myhash field2

(nil)

>HGETALL key

Get all the fields and values in a hash

Available since 2.0.0.

Time complexity: O(N) where N is the size of the hash.

Returns all fields and values of the hash stored at key. In the returned value, every field name is followed by its value, so the length of the reply is twice the size of the hash.

Return value

Array reply: list of fields and their values stored in the hash, or an empty list when key does not exist.

Examples

redis>  HSET myhash field1 “Hello”

(integer) 1

redis>  HSET myhash field2 “World”

(integer) 1

redis>  HGETALL myhash

1) "field1"
2) "Hello"
3) "field2"
4) "World"

>HINCRBY key field increment

Increment the integer value of a hash field by the given number

Available since 2.0.0.

Time complexity: O(1)

Increments the number stored at field in the hash stored at key by increment. If key does not exist, a new key holding a hash is created. If field does not exist the value is set to 0 before the operation is performed.

The range of values supported by HINCRBY is limited to 64 bit signed integers.

Return value

Integer reply: the value at field after the increment operation.

Examples

Since the increment argument is signed, both increment and decrement operations can be performed:

redis>  HSET myhash field 5

(integer) 1

redis>  HINCRBY myhash field 1

(integer) 6

redis>  HINCRBY myhash field -1

(integer) 5

redis>  HINCRBY myhash field -10

(integer) -5

>HINCRBYFLOAT key field increment

Increment the float value of a hash field by the given amount

Available since 2.6.0.

Time complexity: O(1)

Increment the specified field of a hash stored at key, representing a floating point number, by the specified increment. If the field does not exist, it is set to 0 before performing the operation. An error is returned if one of the following conditions occurs:

  • The field contains a value of the wrong type (not a string).
  • The current field content or the specified increment are not parsable as a double precision floating point number.

The exact behavior of this command is identical to the one of the INCRBYFLOAT command, please refer to the documentation of INCRBYFLOAT for further information.

Return value

Bulk string reply: the value of field after the increment.

Examples

redis>  HSET mykey field 10.50

(integer) 1

redis>  HINCRBYFLOAT mykey field 0.1

"10.6"

redis>  HSET mykey field 5.0e3

(integer) 0

redis>  HINCRBYFLOAT mykey field 2.0e2

"5200"

Implementation details

The command is always propagated in the replication link and the Append Only File as a HSET operation, so that differences in the underlying floating point math implementation will not be sources of inconsistency.

>HKEYS key

Get all the fields in a hash

Available since 2.0.0.

Time complexity: O(N) where N is the size of the hash.

Returns all field names in the hash stored at key.

Return value

Array reply: list of fields in the hash, or an empty list when key does not exist.

Examples

redis>  HSET myhash field1 “Hello”

(integer) 1

redis>  HSET myhash field2 “World”

(integer) 1

redis>  HKEYS myhash

1) "field1"
2) "field2"

>HLEN key

Get the number of fields in a hash

Available since 2.0.0.

Time complexity: O(1)

Returns the number of fields contained in the hash stored at key.

Return value

Integer reply: number of fields in the hash, or 0 when key does not exist.

Examples

redis>  HSET myhash field1 “Hello”

(integer) 1

redis>  HSET myhash field2 “World”

(integer) 1

redis>  HLEN myhash

(integer) 2

>HMGET key field [field …]

Get the values of all the given hash fields

Available since 2.0.0.

Time complexity: O(N) where N is the number of fields being requested.

Returns the values associated with the specified fields in the hash stored at key.

For every field that does not exist in the hash, a nil value is returned. Because non-existing keys are treated as empty hashes, running HMGET against a non-existing key will return a list of nil values.

Return value

Array reply: list of values associated with the given fields, in the same order as they are requested.

redis>  HSET myhash field1 “Hello”

(integer) 1

redis>  HSET myhash field2 “World”

(integer) 1

redis>  HMGET myhash field1 field2 nofield

1) "Hello"
2) "World"
3) (nil)

>HMSET key field value [field value …]

Set multiple hash fields to multiple values

Available since 2.0.0.

Time complexity: O(N) where N is the number of fields being set.

Sets the specified fields to their respective values in the hash stored at key. This command overwrites any existing fields in the hash. If key does not exist, a new key holding a hash is created.

Return value

Simple string reply

Examples

redis>  HMSET myhash field1 “Hello” field2 “World”

OK

redis>  HGET myhash field1

"Hello"

redis>  HGET myhash field2

"World"

>HSET key field value

Set the string value of a hash field

Available since 2.0.0.

Time complexity: O(1)

Sets field in the hash stored at key to value. If key does not exist, a new key holding a hash is created. If field already exists in the hash, it is overwritten.

Return value

Integer reply, specifically:

  • 1 if field is a new field in the hash and value was set.
  • 0 if field already exists in the hash and the value was updated.

Examples

redis>  HSET myhash field1 “Hello”

(integer) 1

redis>  HGET myhash field1

"Hello"

>HSETNX key field value

Set the value of a hash field, only if the field does not exist

Available since 2.0.0.

Time complexity: O(1)

Sets field in the hash stored at key to value, only if field does not yet exist. If key does not exist, a new key holding a hash is created. If field already exists, this operation has no effect.

Return value

Integer reply, specifically:

  • 1 if field is a new field in the hash and value was set.
  • 0 if field already exists in the hash and no operation was performed.

Examples

redis>  HSETNX myhash field “Hello”

(integer) 1

redis>  HSETNX myhash field “World”

(integer) 0

redis>  HGET myhash field

"Hello"

>HVALS key

Get all the values in a hash

Available since 2.0.0.

Time complexity: O(N) where N is the size of the hash.

Returns all values in the hash stored at key.

Return value

Array reply: list of values in the hash, or an empty list when key does not exist.

Examples

redis>  HSET myhash field1 “Hello”

(integer) 1

redis>  HSET myhash field2 “World”

(integer) 1

redis>  HVALS myhash

1) "Hello"
2) "World"

>HSCAN key cursor [MATCH pattern] [COUNT count]

Incrementally iterate hash fields and associated values

Available since 2.8.0.

Time complexity: O(1) for every call. O(N) for a complete iteration, including enough command calls for the cursor to return back to 0. N is the number of elements inside the collection.

See SCAN for HSCAN documentation.
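Since HSCAN replies with field/value pairs, a client-side loop typically collects them as a mapping. A minimal sketch assuming the redis-py client; the key and field names are illustrative:

import redis

r = redis.Redis(decode_responses=True)
r.hset("hash", mapping={"name": "Jack", "age": 33})

cursor = 0
while True:
    cursor, fields = r.hscan("hash", cursor=cursor, count=100)
    for field, value in fields.items():  # every element is a field/value pair
        print(field, "=", value)
    if cursor == 0:
        break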

hyperloglog

>PFADD key element [element …]

Adds the specified elements to the specified HyperLogLog.

Available since 2.8.9.

Time complexity: O(1) to add every element.

Adds all the element arguments to the HyperLogLog data structure stored at the variable name specified as first argument.

As a side effect of this command the HyperLogLog internals may be updated to reflect a different estimation of the number of unique items added so far (the cardinality of the set).

If the approximated cardinality estimated by the HyperLogLog changed after executing the command, PFADD returns 1, otherwise 0 is returned. The command automatically creates an empty HyperLogLog structure (that is, a Redis String of a specified length and with a given encoding) if the specified key does not exist.

Calling the command without elements but just the variable name is valid; this will result in no operation being performed if the variable already exists, or just the creation of the data structure if the key does not exist (in the latter case 1 is returned).

For an introduction to the HyperLogLog data structure check the PFCOUNT command page.

Return value

Integer reply, specifically:

  • 1 if at least 1 HyperLogLog internal register was altered. 0 otherwise.

Examples

redis>  PFADD hll a b c d e f g

(integer) 1

redis>  PFCOUNT hll

(integer) 7

>PFCOUNT key [key …]

Return the approximated cardinality of the set(s) observed by the HyperLogLog at key(s).

Available since 2.8.9.

Time complexity: O(1) with a very small average constant time when called with a single key. O(N) with N being the number of keys, and much bigger constant times, when called with multiple keys.

When called with a single key, returns the approximated cardinality computed by the HyperLogLog data structure stored at the specified variable, which is 0 if the variable does not exist.

When called with multiple keys, returns the approximated cardinality of the union of the HyperLogLogs passed, by internally merging the HyperLogLogs stored at the provided keys into a temporary HyperLogLog.

The HyperLogLog data structure can be used in order to count unique elements in a set using just a small constant amount of memory, specifically 12k bytes for every HyperLogLog (plus a few bytes for the key itself).

The returned cardinality of the observed set is not exact, but approximated with a standard error of 0.81%.

For example in order to take the count of all the unique search queries performed in a day, a program needs to call PFADD every time a query is processed. The estimated number of unique queries can be retrieved with PFCOUNT at any time.
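A minimal sketch of that use case, assuming the redis-py client; the key naming scheme (one HyperLogLog per day) is illustrative:

import datetime
import redis

r = redis.Redis()

def record_query(query):
    day_key = "queries:" + datetime.date.today().isoformat()
    r.pfadd(day_key, query)              # about 12k bytes per key, regardless of volume

def unique_queries_today():
    day_key = "queries:" + datetime.date.today().isoformat()
    return r.pfcount(day_key)            # approximate, ~0.81% standard error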

Note: as a side effect of calling this function, it is possible that the HyperLogLog is modified, since the last 8 bytes encode the latest computed cardinality for caching purposes. So PFCOUNT is technically a write command.

Return value

Integer reply, specifically:

  • The approximated number of unique elements observed via PFADD.

Examples

redis>  PFADD hll foo bar zap

(integer) 1

redis>  PFADD hll zap zap zap

(integer) 0

redis>  PFADD hll foo bar

(integer) 0

redis>  PFCOUNT hll

(integer) 3

redis>  PFADD some-other-hll 1 2 3

(integer) 1

redis>  PFCOUNT hll some-other-hll

(integer) 6

Performances

When PFCOUNT is called with a single key, performance is excellent even if in theory the constant times to process a dense HyperLogLog are high. This is possible because PFCOUNT uses caching in order to remember the cardinality previously computed, which rarely changes because most PFADD operations will not update any register. Hundreds of operations per second are possible.

When PFCOUNT is called with multiple keys, an on-the-fly merge of the HyperLogLogs is performed, which is slow; moreover the cardinality of the union can't be cached, so when used with multiple keys PFCOUNT may take a time in the order of magnitude of a millisecond, and should not be abused.

The user should keep in mind that single-key and multiple-keys executions of this command are semantically different and have different performance characteristics.

HyperLogLog representation

Redis HyperLogLogs are represented using a double representation: the sparse representation suitable for HLLs counting a small number of elements (resulting in a small number of registers set to non-zero value), and a dense representation suitable for higher cardinalities. Redis automatically switches from the sparse to the dense representation when needed.

The sparse representation uses a run-length encoding optimized to store efficiently a big number of registers set to zero. The dense representation is a Redis string of 12288 bytes in order to store 16384 6-bit counters. The need for the double representation comes from the fact that using 12k (which is the dense representation memory requirement) to encode just a few registers for smaller cardinalities is extremely suboptimal.

Both representations are prefixed with a 16 bytes header, that includes a magic, an encoding / version field, and the cached cardinality estimation computed, stored in little endian format (the most significant bit is 1 if the estimation is invalid since the HyperLogLog was updated since the cardinality was computed).

The HyperLogLog, being a Redis string, can be retrieved with GET and restored with SET. Calling PFADD, PFCOUNT or PFMERGE commands with a corrupted HyperLogLog is never a problem, it may return random values but does not affect the stability of the server. Most of the time when corrupting a sparse representation, the server recognizes the corruption and returns an error.

The representation is neutral from the point of view of the processor word size and endianness, so the same representation is used by 32 bit and 64 bit processors, big endian or little endian.

More details about the Redis HyperLogLog implementation can be found in this blog post. The source code of the implementation in the hyperloglog.c file is also easy to read and understand, and includes a full specification for the exact encoding used for the sparse and dense representations.

>PFMERGE destkey sourcekey [sourcekey …]

Merge N different HyperLogLogs into a single one.

Available since 2.8.9.

Time complexity: O(N) to merge N HyperLogLogs, but with high constant times.

Merge multiple HyperLogLog values into a unique value that will approximate the cardinality of the union of the observed Sets of the source HyperLogLog structures.

The computed merged HyperLogLog is set to the destination variable, which is created if it does not exist (defaulting to an empty HyperLogLog).

Return value

Simple string reply: The command just returns OK.

Examples

redis>  PFADD hll1 foo bar zap a

(integer) 1

redis>  PFADD hll2 a b c foo

(integer) 1

redis>  PFMERGE hll3 hll1 hll2

OK

redis>  PFCOUNT hll3

(integer) 6

pubsub

>PSUBSCRIBE pattern [pattern …]

Listen for messages published to channels matching the given patterns

Available since 2.0.0.

Time complexity: O(N) where N is the number of patterns the client is already subscribed to.

Subscribes the client to the given patterns.

Supported glob-style patterns:

  • h?llo subscribes to hello, hallo and hxllo
  • h*llo subscribes to hllo and heeeello
  • h[ae]llo subscribes to hello and hallo, but not hillo

Use \ to escape special characters if you want to match them verbatim.

>PUBSUB subcommand [argument [argument …]]

Inspect the state of the Pub/Sub subsystem

Available since 2.8.0.

Time complexity: O(N) for the CHANNELS subcommand, where N is the number of active channels, and assuming constant time pattern matching (relatively short channels and patterns). O(N) for the NUMSUB subcommand, where N is the number of requested channels. O(1) for the NUMPAT subcommand.

The PUBSUB command is an introspection command that allows inspecting the state of the Pub/Sub subsystem. It is composed of subcommands that are documented separately. The general form is:

PUBSUB <subcommand> ... args ...

PUBSUB CHANNELS [pattern]

Lists the currently active channels. An active channel is a Pub/Sub channel with one or more subscribers (not including clients subscribed to patterns).

If no pattern is specified, all the channels are listed, otherwise if pattern is specified only channels matching the specified glob-style pattern are listed.

Return value

Array reply: a list of active channels, optionally matching the specified pattern.

PUBSUB NUMSUB [channel-1 … channel-N]

Returns the number of subscribers (not counting clients subscribed to patterns) for the specified channels.

Return value

Array reply: a list of channels and number of subscribers for every channel. The format is channel, count, channel, count, …, so the list is flat. The order in which the channels are listed is the same as the order of the channels specified in the command call.

Note that it is valid to call this command without channels. In this case it will just return an empty list.

PUBSUB NUMPAT

Returns the number of subscriptions to patterns (that are performed using the PSUBSCRIBE command). Note that this is not just the count of clients subscribed to patterns but the total number of patterns all the clients are subscribed to.

Return value

Integer reply: the number of patterns all the clients are subscribed to.

>PUBLISH channel message

Post a message to a channel

Available since 2.0.0.

Time complexity: O(N+M) where N is the number of clients subscribed to the receiving channel and M is the total number of subscribed patterns (by any client).

Posts a message to the given channel.

Return value

Integer reply: the number of clients that received the message.

>PUNSUBSCRIBE [pattern [pattern …]]

Stop listening for messages posted to channels matching the given patterns

Available since 2.0.0.

Time complexity: O(N+M) where N is the number of patterns the client is already subscribed to and M is the number of total patterns subscribed in the system (by any client).

Unsubscribes the client from the given patterns, or from all of them if none is given.

When no patterns are specified, the client is unsubscribed from all the previously subscribed patterns. In this case, a message for every unsubscribed pattern will be sent to the client.

>SUBSCRIBE channel [channel …]

Listen for messages published to the given channels

Available since 2.0.0.

Time complexity: O(N) where N is the number of channels to subscribe to.

Subscribes the client to the specified channels.

Once the client enters the subscribed state it is not supposed to issue any other commands, except for additional SUBSCRIBE, PSUBSCRIBE, UNSUBSCRIBE and PUNSUBSCRIBE commands.
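A minimal sketch of a subscriber plus a publisher, assuming the redis-py client; the channel name and payload are illustrative. The subscriber uses its own dedicated connection, as required by the subscribed state described above.

import redis

r = redis.Redis(decode_responses=True)

p = r.pubsub()                           # dedicated connection for the subscriber
p.subscribe("news")                      # SUBSCRIBE news
p.get_message(timeout=1.0)               # consume the subscribe confirmation

r.publish("news", "hello subscribers")   # PUBLISH news "hello subscribers"

msg = p.get_message(timeout=1.0)
if msg and msg["type"] == "message":
    print("received:", msg["data"])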

>UNSUBSCRIBE [channel [channel …]]

Stop listening for messages posted to the given channels

Available since 2.0.0.

Time complexity: O(N) where N is the number of clients already subscribed to a channel.

Unsubscribes the client from the given channels, or from all of them if none is given.

When no channels are specified, the client is unsubscribed from all the previously subscribed channels. In this case, a message for every unsubscribed channel will be sent to the client.

set

>SADD key member [member …]

Add one or more members to a set

Available since 1.0.0.

Time complexity: O(N) where N is the number of members to be added.

Add the specified members to the set stored at key. Specified members that are already a member of this set are ignored. If key does not exist, a new set is created before adding the specified members.

An error is returned when the value stored at key is not a set.

Return value

Integer reply: the number of elements that were added to the set, not including all the elements already present in the set.

History

  • >= 2.4: Accepts multiple member arguments. Redis versions before 2.4 are only able to add a single member per call.

Examples

redis>  SADD myset “Hello”

(integer) 1

redis>  SADD myset “World”

(integer) 1

redis>  SADD myset “World”

(integer) 0

redis>  SMEMBERS myset

1) "World"
2) "Hello"

>SCARD key

Get the number of members in a set

Available since 1.0.0.

Time complexity: O(1)

Returns the set cardinality (number of elements) of the set stored at key.

Return value

Integer reply: the cardinality (number of elements) of the set, or 0 if key does not exist.

Examples

redis>  SADD myset “Hello”

(integer) 1

redis>  SADD myset “World”

(integer) 1

redis>  SCARD myset

(integer) 2

>SDIFF key [key …]

Subtract multiple sets

Available since 1.0.0.

Time complexity: O(N) where N is the total number of elements in all given sets.

Returns the members of the set resulting from the difference between the first set and all the successive sets.

For example:

key1 = {a,b,c,d}
key2 = {c}
key3 = {a,c,e}
SDIFF key1 key2 key3 = {b,d}

Keys that do not exist are considered to be empty sets.

Return value

Array reply: list with members of the resulting set.

Examples

redis>  SADD key1 “a”

(integer) 1

redis>  SADD key1 “b”

(integer) 1

redis>  SADD key1 “c”

(integer) 1

redis>  SADD key2 “c”

(integer) 1

redis>  SADD key2 “d”

(integer) 1

redis>  SADD key2 “e”

(integer) 1

redis>  SDIFF key1 key2

1) "a"
2) "b"

>SDIFFSTORE destination key [key …]

Subtract multiple sets and store the resulting set in a key

Available since 1.0.0.

Time complexity: O(N) where N is the total number of elements in all given sets.

This command is equal to SDIFF, but instead of returning the resulting set, it is stored in destination.

If destination already exists, it is overwritten.

Return value

Integer reply: the number of elements in the resulting set.

Examples

redis>  SADD key1 “a”

(integer) 1

redis>  SADD key1 “b”

(integer) 1

redis>  SADD key1 “c”

(integer) 1

redis>  SADD key2 “c”

(integer) 1

redis>  SADD key2 “d”

(integer) 1

redis>  SADD key2 “e”

(integer) 1

redis>  SDIFFSTORE key key1 key2

(integer) 2

redis>  SMEMBERS key

1) "a"
2) "b"

>SINTER key [key …]

Intersect multiple sets

Available since 1.0.0.

Time complexity: O(N*M) worst case where N is the cardinality of the smallest set and M is the number of sets.

Returns the members of the set resulting from the intersection of all the given sets.

For example:

key1 = {a,b,c,d}
key2 = {c}
key3 = {a,c,e}
SINTER key1 key2 key3 = {c}

Keys that do not exist are considered to be empty sets. With one of the keys being an empty set, the resulting set is also empty (since set intersection with an empty set always results in an empty set).

Return value

Array reply: list with members of the resulting set.

Examples

redis>  SADD key1 "a"

(integer) 1

redis>  SADD key1 "b"

(integer) 1

redis>  SADD key1 "c"

(integer) 1

redis>  SADD key2 "c"

(integer) 1

redis>  SADD key2 "d"

(integer) 1

redis>  SADD key2 "e"

(integer) 1

redis>  SINTER key1 key2

1) "c"

>SINTERSTORE destination key [key …]

Intersect multiple sets and store the resulting set in a key

Available since 1.0.0.

Time complexity: O(N*M) worst case where N is the cardinality of the smallest set and M is the number of sets.

This command is equal to SINTER, but instead of returning the resulting set, it is stored in destination.

If destination already exists, it is overwritten.

Return value

Integer reply: the number of elements in the resulting set.

Examples

redis>  SADD key1 "a"

(integer) 1

redis>  SADD key1 "b"

(integer) 1

redis>  SADD key1 "c"

(integer) 1

redis>  SADD key2 "c"

(integer) 1

redis>  SADD key2 "d"

(integer) 1

redis>  SADD key2 "e"

(integer) 1

redis>  SINTERSTORE key key1 key2

(integer) 1

redis>  SMEMBERS key

1) "c"

>SISMEMBER key member

Determine if a given value is a member of a set

Available since 1.0.0.

Time complexity: O(1)

Returns if member is a member of the set stored at key.

Return value

Integer reply, specifically:

  • 1 if the element is a member of the set.
  • 0 if the element is not a member of the set, or if key does not exist.

Examples

redis>  SADD myset "one"

(integer) 1

redis>  SISMEMBER myset "one"

(integer) 1

redis>  SISMEMBER myset "two"

(integer) 0

>SMEMBERS key

Get all the members in a set

Available since 1.0.0.

Time complexity: O(N) where N is the set cardinality.

Returns all the members of the set value stored at key.

This has the same effect as running SINTER with one argument key.

Return value

Array reply: all elements of the set.

Examples

redis>  SADD myset "Hello"

(integer) 1

redis>  SADD myset "World"

(integer) 1

redis>  SMEMBERS myset

1) "World"
2) "Hello"

>SMOVE source destination member

Move a member from one set to another

Available since 1.0.0.

Time complexity: O(1)

Move member from the set at source to the set at destination. This operation is atomic. At any given moment, the element will appear to be a member of either source or destination to other clients.

If the source set does not exist or does not contain the specified element, no operation is performed and 0 is returned. Otherwise, the element is removed from the source set and added to the destination set. When the specified element already exists in the destination set, it is only removed from the source set.

An error is returned if source or destination does not hold a set value.

Return value

Integer reply, specifically:

  • 1 if the element is moved.
  • 0 if the element is not a member of source and no operation was performed.

Examples

redis>  SADD myset "one"

(integer) 1

redis>  SADD myset "two"

(integer) 1

redis>  SADD myotherset "three"

(integer) 1

redis>  SMOVE myset myotherset "two"

(integer) 1

redis>  SMEMBERS myset

1) "one"

redis>  SMEMBERS myotherset

1) "two"
2) "three"

>SPOP key

Remove and return a random member from a set

Available since 1.0.0.

Time complexity: O(1)

Removes and returns a random element from the set value stored at key.

This operation is similar to SRANDMEMBER, which returns a random element from a set but does not remove it.

Return value

Bulk string reply: the removed element, or nil when key does not exist.

Examples

redis>  SADD myset "one"

(integer) 1

redis>  SADD myset "two"

(integer) 1

redis>  SADD myset "three"

(integer) 1

redis>  SPOP myset

"one"

redis>  SMEMBERS myset

1) "three"
2) "two"

>SRANDMEMBER key [count]

Get one or multiple random members from a set

Available since 1.0.0.

Time complexity: Without the count argument O(1), otherwise O(N) where N is the absolute value of the passed count.

When called with just the key argument, return a random element from the set value stored at key.

Starting from Redis version 2.6, when called with the additional count argument, return an array of count distinct elements if count is positive. If called with a negative count, the behavior changes and the command is allowed to return the same element multiple times. In this case the number of returned elements is the absolute value of the specified count.

When called with just the key argument, the operation is similar to SPOP, however while SPOP also removes the randomly selected element from the set, SRANDMEMBER will just return a random element without altering the original set in any way.

Return value

Bulk string reply: without the additional count argument the command returns a Bulk Reply with the randomly selected element, or nil when key does not exist. Array reply: when the additional count argument is passed the command returns an array of elements, or an empty array when key does not exist.

Examples

redis>  SADD myset one two three

(integer) 3

redis>  SRANDMEMBER myset

"two"

redis>  SRANDMEMBER myset 2

1) "one"
2) "two"

redis>  SRANDMEMBER myset -5

1) "three"
2) "two"
3) "one"
4) "three"
5) "one"

Specification of the behavior when count is passed

When a count argument is passed and is positive, the elements are returned as if every selected element is removed from the set (like the extraction of numbers in the game of Bingo). However elements are not removed from the Set. So basically:

  • No repeated elements are returned.
  • If count is bigger than the number of elements inside the Set, the command will only return the whole set without additional elements.

When instead the count is negative, the behavior changes and the extraction happens as if you put the extracted element back inside the bag after every extraction, so repeated elements are possible. The number of elements requested is always returned, since the same elements can be picked again and again, with the exception of an empty Set (non-existing key), which always produces an empty array as a result.
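
The two modes look like this from a client, as a small redis-py sketch (redis-py 3.x and a local instance assumed; the actual members returned are random):

import redis

r = redis.Redis(decode_responses=True)
r.sadd("myset", "one", "two", "three")

# Positive count: distinct elements only, capped at the set cardinality.
print(r.srandmember("myset", 2))    # e.g. ['one', 'three'] -- no repetitions
print(r.srandmember("myset", 10))   # at most 3 elements, i.e. the whole set

# Negative count: repetitions allowed, exactly |count| elements returned.
print(r.srandmember("myset", -5))   # e.g. ['two', 'two', 'one', 'three', 'one']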

Distribution of returned elements

The distribution of the returned elements is far from perfect when the number of elements in the set is small; this is due to the fact that we use an approximated random element function that does not really guarantee a good distribution.

The algorithm used, which is implemented inside dict.c, samples the hash table buckets to find a non-empty one. Once a non-empty bucket is found, since we use chaining in our hash table implementation, the number of elements inside the bucket is checked and a random element is selected.

This means that if you have two non-empty buckets in the entire hash table, and one has three elements while one has just one, the element that is alone in its bucket will be returned with much higher probability.

>SREM key member [member …]

Remove one or more members from a set

Available since 1.0.0.

Time complexity: O(N) where N is the number of members to be removed.

Remove the specified members from the set stored at key. Specified members that are not a member of this set are ignored. If key does not exist, it is treated as an empty set and this command returns 0.

An error is returned when the value stored at key is not a set.

Return value

Integer reply: the number of members that were removed from the set, not including non existing members.

History

  • >= 2.4: Accepts multiple member arguments. Redis versions older than 2.4 can only remove a single member per call.

Examples

redis>  SADD myset "one"

(integer) 1

redis>  SADD myset "two"

(integer) 1

redis>  SADD myset "three"

(integer) 1

redis>  SREM myset "one"

(integer) 1

redis>  SREM myset "four"

(integer) 0

redis>  SMEMBERS myset

1) "three"
2) "two"

>SUNION key [key …]

Add multiple sets

Available since 1.0.0.

Time complexity: O(N) where N is the total number of elements in all given sets.

Returns the members of the set resulting from the union of all the given sets.

For example:

key1 = {a,b,c,d}
key2 = {c}
key3 = {a,c,e}
SUNION key1 key2 key3 = {a,b,c,d,e}

Keys that do not exist are considered to be empty sets.

Return value

Array reply: list with members of the resulting set.

Examples

redis>  SADD key1 "a"

(integer) 1

redis>  SADD key1 "b"

(integer) 1

redis>  SADD key1 "c"

(integer) 1

redis>  SADD key2 "c"

(integer) 1

redis>  SADD key2 "d"

(integer) 1

redis>  SADD key2 "e"

(integer) 1

redis>  SUNION key1 key2

1) "a"
2) "c"
3) "b"
4) "d"
5) "e"

>SUNIONSTORE destination key [key …]

Add multiple sets and store the resulting set in a key

Available since 1.0.0.

Time complexity: O(N) where N is the total number of elements in all given sets.

This command is equal to SUNION, but instead of returning the resulting set, it is stored in destination.

If destination already exists, it is overwritten.

Return value

Integer reply: the number of elements in the resulting set.

Examples

redis>  SADD key1 "a"

(integer) 1

redis>  SADD key1 "b"

(integer) 1

redis>  SADD key1 "c"

(integer) 1

redis>  SADD key2 "c"

(integer) 1

redis>  SADD key2 "d"

(integer) 1

redis>  SADD key2 "e"

(integer) 1

redis>  SUNIONSTORE key key1 key2

(integer) 5

redis>  SMEMBERS key

1) "a"
2) "b"
3) "c"
4) "d"
5) "e"

>SSCAN key cursor [MATCH pattern] [COUNT count]

Incrementally iterate Set elements

Available since 2.8.0.

Time complexity: O(1) for every call. O(N) for a complete iteration, including enough command calls for the cursor to return back to 0. N is the number of elements inside the collection.

See SCAN for SSCAN documentation.
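
Since no transcript is shown here, a short redis-py sketch of incremental iteration may help (redis-py 3.x and a local instance assumed; myset and the member names are illustrative):

import redis

r = redis.Redis(decode_responses=True)
r.sadd("myset", *["member:%d" % i for i in range(1000)])

# sscan_iter wraps the SSCAN cursor loop: it keeps issuing SSCAN with the
# cursor returned by the previous call until the cursor comes back to 0.
matches = [m for m in r.sscan_iter("myset", match="member:99*", count=100)]
print(sorted(matches))   # ['member:99', 'member:990', ..., 'member:999']

# The raw command returns (cursor, elements) and must be looped manually.
cursor, elements = r.sscan("myset", cursor=0, count=100)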

sorted_set

>ZADD key score member [score member …]

Add one or more members to a sorted set, or update its score if it already exists

Available since 1.2.0.

Time complexity: O(log(N)) where N is the number of elements in the sorted set.

Adds all the specified members with the specified scores to the sorted set stored at key. It is possible to specify multiple score / member pairs. If a specified member is already a member of the sorted set, the score is updated and the element reinserted at the right position to ensure the correct ordering.

If key does not exist, a new sorted set with the specified members as sole members is created, as if the sorted set were empty. If the key exists but does not hold a sorted set, an error is returned.

The score values should be the string representation of a double precision floating point number. +inf and -inf values are valid values as well.

Sorted sets 101

Sorted sets are sorted by their score in ascending order. The same element exists only a single time; no repeated elements are permitted. The score can be modified both by ZADD, which will update the element's score (and, as a side effect, its position in the sorted set), and by ZINCRBY, which can be used to update the score relative to its previous value.

The current score of an element can be retrieved using the ZSCORE command, which can also be used to verify whether an element already exists.

For an introduction to sorted sets, see the data types page on sorted sets.

Elements with the same score

While the same element can’t be repeated in a sorted set since every element is unique, it is possible to add multiple different elements having the same score. When multiple elements have the same score, they are ordered lexicographically (they are still ordered by score as a first key, however, locally, all the elements with the same score are relatively ordered lexicographically).

The lexicographic ordering used is binary; it compares strings as arrays of bytes.

If the user inserts all the elements in a sorted set with the same score (for example 0), all the elements of the sorted set are sorted lexicographically, and range queries on elements are possible using the command ZRANGEBYLEX (Note: it is also possible to query sorted sets by range of scores using ZRANGEBYSCORE).

Return value

Integer reply, specifically:

  • The number of elements added to the sorted sets, not including elements already existing for which the score was updated.

History

  • >= 2.4: Accepts multiple elements. In Redis versions older than 2.4 it was possible to add or update a single member per call.

Examples

redis>  ZADD myzset 1 "one"

(integer) 1

redis>  ZADD myzset 1 "uno"

(integer) 1

redis>  ZADD myzset 2 "two" 3 "three"

(integer) 2

redis>  ZRANGE myzset 0 -1 WITHSCORES

1) "one"
2) "1"
3) "uno"
4) "1"
5) "two"
6) "2"
7) "three"
8) "3"

>ZCARD key

Get the number of members in a sorted set

Available since 1.2.0.

Time complexity: O(1)

Returns the sorted set cardinality (number of elements) of the sorted set stored at key.

Return value

Integer reply: the cardinality (number of elements) of the sorted set, or 0 if key does not exist.

Examples

redis>  ZADD myzset 1 "one"

(integer) 1

redis>  ZADD myzset 2 "two"

(integer) 1

redis>  ZCARD myzset

(integer) 2

>ZCOUNT key min max

Count the members in a sorted set with scores within the given values

Available since 2.0.0.

Time complexity: O(log(N)) with N being the number of elements in the sorted set.

Returns the number of elements in the sorted set at key with a score between min and max.

The min and max arguments have the same semantic as described for ZRANGEBYSCORE.

Note: the command has a complexity of just O(log(N)) because it uses element ranks (see ZRANK) to get an idea of the range. Because of this there is no need to do work proportional to the size of the range.

Return value

Integer reply: the number of elements in the specified score range.

Examples

redis>  ZADD myzset 1 "one"

(integer) 1

redis>  ZADD myzset 2 "two"

(integer) 1

redis>  ZADD myzset 3 "three"

(integer) 1

redis>  ZCOUNT myzset -inf +inf

(integer) 3

redis>  ZCOUNT myzset (1 3

(integer) 2

>ZINCRBY key increment member

Increment the score of a member in a sorted set

Available since 1.2.0.

Time complexity: O(log(N)) where N is the number of elements in the sorted set.

Increments the score of member in the sorted set stored at key by increment. If member does not exist in the sorted set, it is added with increment as its score (as if its previous score was 0.0). If key does not exist, a new sorted set with the specified member as its sole member is created.

An error is returned when key exists but does not hold a sorted set.

The score value should be the string representation of a numeric value, and accepts double precision floating point numbers. It is possible to provide a negative value to decrement the score.

Return value

Bulk string reply: the new score of member (a double precision floating point number), represented as string.

Examples

redis>  ZADD myzset 1 "one"

(integer) 1

redis>  ZADD myzset 2 "two"

(integer) 1

redis>  ZINCRBY myzset 2 "one"

"3"

redis>  ZRANGE myzset 0 -1 WITHSCORES

1) "two"
2) "2"
3) "one"
4) "3"

>ZINTERSTORE destination numkeys key [key …] [WEIGHTS weight [weight …]] [AGGREGATE SUM|MIN|MAX]

Intersect multiple sorted sets and store the resulting sorted set in a new key

Available since 2.0.0.

Time complexity: O(N*K)+O(M*log(M)) worst case with N being the smallest input sorted set, K being the number of input sorted sets and M being the number of elements in the resulting sorted set.

Computes the intersection of numkeys sorted sets given by the specified keys, and stores the result in destination. It is mandatory to provide the number of input keys (numkeys) before passing the input keys and the other (optional) arguments.

By default, the resulting score of an element is the sum of its scores in the sorted sets where it exists. Because intersection requires an element to be a member of every given sorted set, this results in the score of every element in the resulting sorted set being equal to the number of input sorted sets.

For a description of the WEIGHTS and AGGREGATE options, see ZUNIONSTORE.

If destination already exists, it is overwritten.

Return value

Integer reply: the number of elements in the resulting sorted set at destination.

Examples

redis>  ZADD zset1 1 "one"

(integer) 1

redis>  ZADD zset1 2 "two"

(integer) 1

redis>  ZADD zset2 1 "one"

(integer) 1

redis>  ZADD zset2 2 "two"

(integer) 1

redis>  ZADD zset2 3 "three"

(integer) 1

redis>  ZINTERSTORE out 2 zset1 zset2 WEIGHTS 2 3

(integer) 2

redis>  ZRANGE out 0 -1 WITHSCORES

1) "one"
2) "5"
3) "two"
4) "10"

>ZLEXCOUNT key min max

Count the number of members in a sorted set between a given lexicographical range

Available since 2.8.9.

Time complexity: O(log(N)) with N being the number of elements in the sorted set.

When all the elements in a sorted set are inserted with the same score, in order to force lexicographical ordering, this command returns the number of elements in the sorted set at key with a value between min and max.

The min and max arguments have the same meaning as described for ZRANGEBYLEX.

Note: the command has a complexity of just O(log(N)) because it uses element ranks (see ZRANK) to get an idea of the range. Because of this there is no need to do work proportional to the size of the range.

Return value

Integer reply: the number of elements in the specified lexicographical range.

Examples

redis>  ZADD myzset 0 a 0 b 0 c 0 d 0 e

(integer) 5

redis>  ZADD myzset 0 f 0 g

(integer) 2

redis>  ZLEXCOUNT myzset - +

(integer) 7

redis>  ZLEXCOUNT myzset [b [f

(integer) 5

>ZRANGE key start stop [WITHSCORES]

Return a range of members in a sorted set, by index

Available since 1.2.0.

Time complexity: O(log(N)+M) with N being the number of elements in the sorted set and M the number of elements returned.

Returns the specified range of elements in the sorted set stored at key. The elements are considered to be ordered from the lowest to the highest score. Lexicographical order is used for elements with equal score.

See ZREVRANGE when you need the elements ordered from highest to lowest score (and descending lexicographical order for elements with equal score).

Both start and stop are zero-based indexes, where 0 is the first element, 1 is the next element and so on. They can also be negative numbers indicating offsets from the end of the sorted set, with -1 being the last element of the sorted set, -2 the penultimate element and so on.

Out of range indexes will not produce an error. If start is larger than the largest index in the sorted set, or start > stop, an empty list is returned. If stop is larger than the end of the sorted set Redis will treat it like it is the last element of the sorted set.

It is possible to pass the WITHSCORES option in order to return the scores of the elements together with the elements. The returned list will contain value1,score1,…,valueN,scoreN instead of value1,…,valueN. Client libraries are free to return a more appropriate data type (suggestion: an array with (value, score) arrays/tuples).

Return value

Array reply: list of elements in the specified range (optionally with their scores).

Examples

redis>  ZADD myzset 1 "one"

(integer) 1

redis>  ZADD myzset 2 "two"

(integer) 1

redis>  ZADD myzset 3 "three"

(integer) 1

redis>  ZRANGE myzset 0 -1

1) "one"
2) "two"
3) "three"

redis>  ZRANGE myzset 2 3

1) "three"

redis>  ZRANGE myzset -2 -1

1) "two"
2) "three"

>ZRANGEBYLEX key min max [LIMIT offset count]

Return a range of members in a sorted set, by lexicographical range

Available since 2.8.9.

Time complexity: O(log(N)+M) with N being the number of elements in the sorted set and M the number of elements being returned. If M is constant (e.g. always asking for the first 10 elements with LIMIT), you can consider it O(log(N)).

When all the elements in a sorted set are inserted with the same score, in order to force lexicographical ordering, this command returns all the elements in the sorted set at key with a value between min and max.

If the elements in the sorted set have different scores, the returned elements are unspecified.

The elements are considered to be ordered from lower to higher strings as compared byte-by-byte using the memcmp() C function. Longer strings are considered greater than shorter strings if the common part is identical.

The optional LIMIT argument can be used to only get a range of the matching elements (similar to SELECT LIMIT offset, count in SQL). Keep in mind that if offset is large, the sorted set needs to be traversed for offset elements before getting to the elements to return, which can add up to O(N) time complexity.

How to specify intervals

Valid start and stop must start with ( or [, in order to specify whether the range item is respectively exclusive or inclusive. The special values of + or - for start and stop have the special meaning of positively infinite and negatively infinite strings, so for instance the command ZRANGEBYLEX myzset - + is guaranteed to return all the elements in the sorted set, if all the elements have the same score.

Details on strings comparison

Strings are compared as a binary array of bytes. Because of how the ASCII character set is specified, this usually also has the effect of comparing normal ASCII characters in an obvious dictionary order. However, this is not true if non-plain-ASCII strings are used (for example UTF-8 strings).

However the user can apply a transformation to the encoded string so that the first part of the element inserted in the sorted set will compare as the user requires for the specific application. For example if I want to add strings that will be compared in a case-insensitive way, but I still want to retrieve the real case when querying, I can add strings in the following way:

ZADD autocomplete 0 foo:Foo 0 bar:BAR 0 zap:zap

Because of the first normalized part in every element (before the colon character), we are forcing a given comparison; however, after the range is queried using ZRANGEBYLEX, the application can display to the user the second part of the string, after the colon.

The binary nature of the comparison allows sorted sets to be used as a general-purpose index; for example, the first part of the element can be a 64-bit big-endian number: since big-endian numbers have the most significant bytes in the initial positions, the binary comparison will match the numerical comparison of the numbers. This can be used in order to implement range queries on 64-bit values. As in the example below, after the first 8 bytes we can store the value of the element we are actually indexing.
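
The case-insensitive index described above can be sketched with redis-py roughly as follows (redis-py 3.x and a local instance assumed; the key autocomplete and the words are illustrative):

import redis

r = redis.Redis(decode_responses=True)

# The part before the colon is the normalized (lowercased) form used for
# comparison; the part after the colon keeps the real case for display.
for word in ("Foo", "BAR", "zap"):
    r.zadd("autocomplete", {word.lower() + ":" + word: 0})

# Prefix query: every entry whose normalized form starts with "f"
# (>= "f" inclusive, < "g" exclusive).
for entry in r.zrangebylex("autocomplete", "[f", "(g"):
    print(entry.split(":", 1)[1])   # Foo

# The same trick works for numbers: prefix members with the 8-byte
# big-endian encoding of the value (e.g. struct.pack(">Q", n)) to get
# numeric range queries over a lexicographic index.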

Return value

Array reply: list of elements in the specified score range.

Examples

redis>  ZADD myzset 0 a 0 b 0 c 0 d 0 e 0 f 0 g

(integer) 7

redis>  ZRANGEBYLEX myzset - [c

1) "a"
2) "b"
3) "c"

redis>  ZRANGEBYLEX myzset - (c

1) "a"
2) "b"

redis>  ZRANGEBYLEX myzset [aaa (g

1) "b"
2) "c"
3) "d"
4) "e"
5) "f"

>ZREVRANGEBYLEX key max min [LIMIT offset count]

Return a range of members in a sorted set, by lexicographical range, ordered from higher to lower strings.

Available since 2.8.9.

Time complexity: O(log(N)+M) with N being the number of elements in the sorted set and M the number of elements being returned. If M is constant (e.g. always asking for the first 10 elements with LIMIT), you can consider it O(log(N)).

When all the elements in a sorted set are inserted with the same score, in order to force lexicographical ordering, this command returns all the elements in the sorted set at key with a value between max and min.

Apart from the reversed ordering, ZREVRANGEBYLEX is similar to ZRANGEBYLEX.

Return value

Array reply: list of elements in the specified score range.

Examples

redis>  ZADD myzset 0 a 0 b 0 c 0 d 0 e 0 f 0 g

(integer) 7

redis>  ZREVRANGEBYLEX myzset [c -

1) "c"
2) "b"
3) "a"

redis>  ZREVRANGEBYLEX myzset (c -

1) "b"
2) "a"

redis>  ZREVRANGEBYLEX myzset (g [aaa

1) "f"
2) "e"
3) "d"
4) "c"
5) "b"

>ZRANGEBYSCORE key min max [WITHSCORES] [LIMIT offset count]

Return a range of members in a sorted set, by score

Available since 1.0.5.

Time complexity: O(log(N)+M) with N being the number of elements in the sorted set and M the number of elements being returned. If M is constant (e.g. always asking for the first 10 elements with LIMIT), you can consider it O(log(N)).

Returns all the elements in the sorted set at key with a score between min and max (including elements with score equal to min or max). The elements are considered to be ordered from low to high scores.

The elements having the same score are returned in lexicographical order (this follows from a property of the sorted set implementation in Redis and does not involve further computation).

The optional LIMIT argument can be used to only get a range of the matching elements (similar to SELECT LIMIT offset, count in SQL). Keep in mind that if offset is large, the sorted set needs to be traversed for offset elements before getting to the elements to return, which can add up to O(N) time complexity.

The optional WITHSCORES argument makes the command return both the element and its score, instead of the element alone. This option is available since Redis 2.0.

Exclusive intervals and infinity

min and max can be -inf and +inf, so that you are not required to know the highest or lowest score in the sorted set to get all elements from or up to a certain score.

By default, the interval specified by min and max is closed (inclusive). It is possible to specify an open interval (exclusive) by prefixing the score with the character (. For example:

ZRANGEBYSCORE zset (1 5

Will return all elements with 1 < score <= 5 while:

ZRANGEBYSCORE zset (5 (10

Will return all the elements with 5 < score < 10 (5 and 10 excluded).
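
From a client, exclusive bounds and infinities are simply passed as strings. A short redis-py sketch (redis-py 3.x and a local instance assumed; the key zset and its members are illustrative):

import redis

r = redis.Redis(decode_responses=True)
r.zadd("zset", {"a": 1, "b": 3, "c": 5, "d": 10})

# "(" marks an exclusive bound; -inf/+inf stand for the lowest/highest score.
print(r.zrangebyscore("zset", 1, 5))          # ['a', 'b', 'c']
print(r.zrangebyscore("zset", "(1", 5))       # ['b', 'c']
print(r.zrangebyscore("zset", "(5", "(10"))   # []
print(r.zrangebyscore("zset", "-inf", "+inf", withscores=True))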

Return value

Array reply: list of elements in the specified score range (optionally with their scores).

Examples

redis>  ZADD myzset 1 "one"

(integer) 1

redis>  ZADD myzset 2 "two"

(integer) 1

redis>  ZADD myzset 3 "three"

(integer) 1

redis>  ZRANGEBYSCORE myzset -inf +inf

1) "one"
2) "two"
3) "three"

redis>  ZRANGEBYSCORE myzset 1 2

1) "one"
2) "two"

redis>  ZRANGEBYSCORE myzset (1 2

1) "two"

redis>  ZRANGEBYSCORE myzset (1 (2

(empty list or set)

>ZRANK key member

Determine the index of a member in a sorted set

Available since 2.0.0.

Time complexity: O(log(N))

Returns the rank of member in the sorted set stored at key, with the scores ordered from low to high. The rank (or index) is 0-based, which means that the member with the lowest score has rank 0.

Use ZREVRANK to get the rank of an element with the scores ordered from high to low.

Return value

  • If member exists in the sorted set, Integer reply: the rank of member.
  • If member does not exist in the sorted set or key does not exist, Bulk string reply: nil.

Examples

redis>  ZADD myzset 1 "one"

(integer) 1

redis>  ZADD myzset 2 "two"

(integer) 1

redis>  ZADD myzset 3 "three"

(integer) 1

redis>  ZRANK myzset "three"

(integer) 2

redis>  ZRANK myzset "four"

(nil)

>ZREM key member [member …]

Remove one or more members from a sorted set

Available since 1.2.0.

Time complexity: O(M*log(N)) with N being the number of elements in the sorted set and M the number of elements to be removed.

Removes the specified members from the sorted set stored at key. Non existing members are ignored.

An error is returned when key exists and does not hold a sorted set.

Return value

Integer reply, specifically:

  • The number of members removed from the sorted set, not including non existing members.

History

  • >= 2.4: Accepts multiple elements. In Redis versions older than 2.4 it was possible to remove a single member per call.

Examples

redis>  ZADD myzset 1 "one"

(integer) 1

redis>  ZADD myzset 2 "two"

(integer) 1

redis>  ZADD myzset 3 "three"

(integer) 1

redis>  ZREM myzset "two"

(integer) 1

redis>  ZRANGE myzset 0 -1 WITHSCORES

1) "one"
2) "1"
3) "three"
4) "3"

>ZREMRANGEBYLEX key min max

Remove all members in a sorted set between the given lexicographical range

Available since 2.8.9.

Time complexity: O(log(N)+M) with N being the number of elements in the sorted set and M the number of elements removed by the operation.

When all the elements in a sorted set are inserted with the same score, in order to force lexicographical ordering, this command removes all elements in the sorted set stored at key between the lexicographical range specified by min and max.

The meaning of min and max is the same as for the ZRANGEBYLEX command. Similarly, this command actually removes the same elements that ZRANGEBYLEX would return if called with the same min and max arguments.

Return value

Integer reply: the number of elements removed.

Examples

redis>  ZADD myzset 0 aaaa 0 b 0 c 0 d 0 e

(integer) 5

redis>  ZADD myzset 0 foo 0 zap 0 zip 0 ALPHA 0 alpha

(integer) 5

redis>  ZRANGE myzset 0 -1

1) "ALPHA"
 2) "aaaa"
 3) "alpha"
 4) "b"
 5) "c"
 6) "d"
 7) "e"
 8) "foo"
 9) "zap"
10) "zip"

redis>  ZREMRANGEBYLEX myzset [alpha [omega

(integer) 6

redis>  ZRANGE myzset 0 -1

1) "ALPHA"
2) "aaaa"
3) "zap"
4) "zip"

>ZREMRANGEBYRANK key start stop

Remove all members in a sorted set within the given indexes

Available since 2.0.0.

Time complexity: O(log(N)+M) with N being the number of elements in the sorted set and M the number of elements removed by the operation.

Removes all elements in the sorted set stored at key with rank between start and stop. Both start and stop are 0-based indexes with 0 being the element with the lowest score. These indexes can be negative numbers, where they indicate offsets starting at the element with the highest score. For example: -1 is the element with the highest score, -2 the element with the second highest score and so forth.

Return value

Integer reply: the number of elements removed.

Examples

redis>  ZADD myzset 1 "one"

(integer) 1

redis>  ZADD myzset 2 "two"

(integer) 1

redis>  ZADD myzset 3 "three"

(integer) 1

redis>  ZREMRANGEBYRANK myzset 0 1

(integer) 2

redis>  ZRANGE myzset 0 -1 WITHSCORES

1) "three"
2) "3"

>ZREMRANGEBYSCORE key min max

Remove all members in a sorted set within the given scores

Available since 1.2.0.

Time complexity: O(log(N)+M) with N being the number of elements in the sorted set and M the number of elements removed by the operation.

Removes all elements in the sorted set stored at key with a score between min and max (inclusive).

Since version 2.1.6, min and max can be exclusive, following the syntax of ZRANGEBYSCORE.

Return value

Integer reply: the number of elements removed.

Examples

redis>  ZADD myzset 1 "one"

(integer) 1

redis>  ZADD myzset 2 "two"

(integer) 1

redis>  ZADD myzset 3 "three"

(integer) 1

redis>  ZREMRANGEBYSCORE myzset -inf (2

(integer) 1

redis>  ZRANGE myzset 0 -1 WITHSCORES

1) "two"
2) "2"
3) "three"
4) "3"

>ZREVRANGE key start stop [WITHSCORES]

Return a range of members in a sorted set, by index, with scores ordered from high to low

Available since 1.2.0.

Time complexity: O(log(N)+M) with N being the number of elements in the sorted set and M the number of elements returned.

Returns the specified range of elements in the sorted set stored at key. The elements are considered to be ordered from the highest to the lowest score. Descending lexicographical order is used for elements with equal score.

Apart from the reversed ordering, ZREVRANGE is similar to ZRANGE.

Return value

Array reply: list of elements in the specified range (optionally with their scores).

Examples

redis>  ZADD myzset 1 "one"

(integer) 1

redis>  ZADD myzset 2 "two"

(integer) 1

redis>  ZADD myzset 3 "three"

(integer) 1

redis>  ZREVRANGE myzset 0 -1

1) "three"
2) "two"
3) "one"

redis>  ZREVRANGE myzset 2 3

1) "one"

redis>  ZREVRANGE myzset -2 -1

1) "two"
2) "one"

>ZREVRANGEBYSCORE key max min [WITHSCORES] [LIMIT offset count]

Return a range of members in a sorted set, by score, with scores ordered from high to low

Available since 2.2.0.

Time complexity: O(log(N)+M) with N being the number of elements in the sorted set and M the number of elements being returned. If M is constant (e.g. always asking for the first 10 elements with LIMIT), you can consider it O(log(N)).

Returns all the elements in the sorted set at key with a score between max and min (including elements with score equal to max or min). Contrary to the default ordering of sorted sets, for this command the elements are considered to be ordered from high to low scores.

The elements having the same score are returned in reverse lexicographical order.

Apart from the reversed ordering, ZREVRANGEBYSCORE is similar to ZRANGEBYSCORE.

Return value

Array reply: list of elements in the specified score range (optionally with their scores).

Examples

redis>  ZADD myzset 1 "one"

(integer) 1

redis>  ZADD myzset 2 "two"

(integer) 1

redis>  ZADD myzset 3 "three"

(integer) 1

redis>  ZREVRANGEBYSCORE myzset +inf -inf

1) "three"
2) "two"
3) "one"

redis>  ZREVRANGEBYSCORE myzset 2 1

1) "two"
2) "one"

redis>  ZREVRANGEBYSCORE myzset 2 (1

1) "two"

redis>  ZREVRANGEBYSCORE myzset (2 (1

(empty list or set)

>ZREVRANK key member

Determine the index of a member in a sorted set, with scores ordered from high to low

Available since 2.0.0.

Time complexity: O(log(N))

Returns the rank of member in the sorted set stored at key, with the scores ordered from high to low. The rank (or index) is 0-based, which means that the member with the highest score has rank 0.

Use ZRANK to get the rank of an element with the scores ordered from low to high.

Return value

  • If member exists in the sorted set, Integer reply: the rank of member.
  • If member does not exist in the sorted set or key does not exist, Bulk string reply: nil.

Examples

redis>  ZADD myzset 1 "one"

(integer) 1

redis>  ZADD myzset 2 "two"

(integer) 1

redis>  ZADD myzset 3 "three"

(integer) 1

redis>  ZREVRANK myzset "one"

(integer) 2

redis>  ZREVRANK myzset "four"

(nil)

>ZSCORE key member

Get the score associated with the given member in a sorted set

Available since 1.2.0.

Time complexity: O(1)

Returns the score of member in the sorted set at key.

If member does not exist in the sorted set, or key does not exist, nil is returned.

Return value

Bulk string reply: the score of member (a double precision floating point number), represented as string.

Examples

redis>  ZADD myzset 1 "one"

(integer) 1

redis>  ZSCORE myzset "one"

"1"

>ZUNIONSTORE destination numkeys key [key …] [WEIGHTS weight [weight …]] [AGGREGATE SUM|MIN|MAX]

Add multiple sorted sets and store the resulting sorted set in a new key

Available since 2.0.0.

Time complexity: O(N)+O(M log(M)) with N being the sum of the sizes of the input sorted sets, and M being the number of elements in the resulting sorted set.

Computes the union of numkeys sorted sets given by the specified keys, and stores the result in destination. It is mandatory to provide the number of input keys (numkeys) before passing the input keys and the other (optional) arguments.

By default, the resulting score of an element is the sum of its scores in the sorted sets where it exists.

Using the WEIGHTS option, it is possible to specify a multiplication factor for each input sorted set. This means that the score of every element in every input sorted set is multiplied by this factor before being passed to the aggregation function. When WEIGHTS is not given, the multiplication factors default to 1.

With the AGGREGATE option, it is possible to specify how the results of the union are aggregated. This option defaults to SUM, where the score of an element is summed across the inputs where it exists. When this option is set to either MIN or MAX, the resulting set will contain the minimum or maximum score of an element across the inputs where it exists.

If destination already exists, it is overwritten.

Return value

Integer reply: the number of elements in the resulting sorted set at destination.

Examples

redis>  ZADD zset1 1 "one"

(integer) 1

redis>  ZADD zset1 2 "two"

(integer) 1

redis>  ZADD zset2 1 "one"

(integer) 1

redis>  ZADD zset2 2 "two"

(integer) 1

redis>  ZADD zset2 3 "three"

(integer) 1

redis>  ZUNIONSTORE out 2 zset1 zset2 WEIGHTS 2 3

(integer) 3

redis>  ZRANGE out 0 -1 WITHSCORES

1) "one"
2) "5"
3) "three"
4) "9"
5) "two"
6) "10"

>ZSCAN key cursor [MATCH pattern] [COUNT count]

Incrementally iterate sorted sets elements and associated scores

Available since 2.8.0.

Time complexity: O(1) for every call. O(N) for a complete iteration, including enough command calls for the cursor to return back to 0. N is the number of elements inside the collection.

See SCAN for ZSCAN documentation.
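
As with SSCAN, redis-py can hide the cursor handling. A short sketch (redis-py 3.x and a local instance assumed; myzset and the member names are illustrative):

import redis

r = redis.Redis(decode_responses=True)
r.zadd("myzset", {"member:%d" % i: i for i in range(1000)})

# zscan_iter wraps the ZSCAN cursor loop and yields (member, score) pairs.
for member, score in r.zscan_iter("myzset", match="member:42*"):
    print(member, score)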

Written by: humboldt (humboldt 的趣味程序园)