API module#

Added in version 1.0.0.

The module provides an HTTP RESTful interface for accessing, in JSON format, basic information about a web server instance, as well as metrics of client connections, shared memory zones, DNS queries, HTTP requests, the HTTP response cache, TCP/UDP sessions of the stream_core module, and zones of the http limit_conn, stream limit_conn, limit_req, and http upstream modules.

The API also offers dynamic configuration, which allows updating settings without reloading the configuration or restarting Angie PRO itself; currently, it enables configuring individual peers in an upstream.

Directives#

api#

Syntax:

api path;

Default:

Context:

location

Enables the HTTP RESTful interface in the containing location.

The path parameter is mandatory and works similarly to the alias directive, but operates on the API tree rather than on the filesystem hierarchy.

When specified in a prefix location:

location /stats/ {
    api /status/http/server_zones/;
}

the part of the request URI that matches the prefix location /stats/ is replaced by the path /status/http/server_zones/ specified in the directive parameter. For example, a request for /stats/foo/ accesses the /status/http/server_zones/foo/ API element.

It is also possible to use variables: api /status/$module/server_zones/$name/. The directive can likewise be specified inside a regex location:

location ~^/api/([^/]+)/(.*)$ {
    api /status/http/$1_zones/$2;
}

Here, similar to alias, the parameter defines the whole path to the API element. For example, for the request /api/location/bar/data/ the following positional variables are populated:

$1 = "location"
$2 = "bar/data/"

which, after interpolation, results in the /status/http/location_zones/bar/data API request.

You can decouple the dynamic configuration API from the immutable metrics API that reflects current state:

location /config/ {
    api /config/;
}

location /status/ {
    api /status/;
}

This also allows fine-grained control of API access rights, e.g.:

location /status/ {
    api /status/;

    allow 127.0.0.1;
    deny  all;
}

Or:

location /blog/requests/ {
    api /status/http/server_zones/blog/requests/;

    auth_basic           "blog";
    auth_basic_user_file conf/htpasswd;
}

api_config_files#

Syntax:

api_config_files on | off;

Default:

off

Context:

location

Enables or disables the config_files object in the /status/angie/ API section; the object enumerates the contents of all Angie PRO configuration files currently loaded by the server instance. For example, with this configuration:

location /status/ {
    api /status/;
    api_config_files on;
}

A query to /status/angie/ returns approximately this:

{
    "version":"1.5.0",
    "address":"192.168.16.5",
    "generation":1,
    "load_time":"2024-03-27T12:58:39.789Z",
    "config_files": {
        "/etc/angie/angie.conf": "...",
        "/etc/angie/mime.types": "..."
    }
}

By default, the object is disabled because the configuration files may contain sensitive, confidential details.

Metrics#

Angie PRO exposes usage metrics in the /status/ section of the API; you can make them accessible by defining a corresponding location. Full access:

location /status/ {
    api /status/;
}

Subtree access, as discussed earlier:

location /stats/ {
    api /status/http/server_zones/;
}

Example configuration#

The example is configured with a /status/ location, a resolver, an http upstream, an http server with a location, a cache, and http limit_conn and limit_req zones:

http {

    resolver 127.0.0.53 status_zone=resolver_zone;
    proxy_cache_path /var/cache/angie/cache keys_zone=cache_zone:2m;
    limit_conn_zone $binary_remote_addr zone=limit_conn_zone:10m;
    limit_req_zone $binary_remote_addr zone=limit_req_zone:10m rate=1r/s;

    upstream upstream {
        zone upstream 256k;
        server backend.example.com service=_example._tcp resolve max_conns=5;
        keepalive 4;
    }

    server {
        server_name www.example.com;
        listen 443 ssl;

        status_zone http_server_zone;
        proxy_cache cache_zone;

        access_log /var/log/access.log main;

        location / {
            root /usr/share/angie/html;
            status_zone location_zone;
            limit_conn limit_conn_zone 1;
            limit_req zone=limit_req_zone burst=5;
        }
        location /status/ {
            api /status/;

            allow 127.0.0.1;
            deny all;
        }

    }
}

It responds to curl https://www.example.com/status/ with the following JSON:

JSON tree
{
    "angie": {
        "version":"1.5.0",
        "address":"192.168.16.5",
        "generation":1,
        "load_time":"2024-03-27T12:58:39.789Z"
    },

    "connections": {
        "accepted":2257,
        "dropped":0,
        "active":3,
        "idle":1
    },

    "slabs": {
        "cache_zone": {
            "pages": {
                "used":2,
                "free":506
            },

            "slots": {
                "64": {
                    "used":1,
                    "free":63,
                    "reqs":1,
                    "fails":0
                },

                "512": {
                    "used":1,
                    "free":7,
                    "reqs":1,
                    "fails":0
                }
            }
        },

        "limit_conn_zone": {
            "pages": {
                "used":2,
                "free":2542
            },

            "slots": {
                "64": {
                    "used":1,
                    "free":63,
                    "reqs":74,
                    "fails":0
                },

                "128": {
                    "used":1,
                    "free":31,
                    "reqs":1,
                    "fails":0
                }
            }
        },

        "limit_req_zone": {
            "pages": {
                "used":2,
                "free":2542
            },

            "slots": {
                "64": {
                    "used":1,
                    "free":63,
                    "reqs":1,
                    "fails":0
                },

                "128": {
                    "used":2,
                    "free":30,
                    "reqs":3,
                    "fails":0
                }
            }
        }
    },

    "http": {
        "server_zones": {
            "http_server_zone": {
                "ssl": {
                    "handshaked":4174,
                    "reuses":0,
                    "timedout":0,
                    "failed":0
                },

                "requests": {
                    "total":4327,
                    "processing":0,
                    "discarded":8
                },

                "responses": {
                    "200":4305,
                    "302":12,
                    "404":4
                },

                "data": {
                    "received":733955,
                    "sent":59207757
                }
            }
        },

        "location_zones": {
            "location_zone": {
                "requests": {
                    "total":4158,
                    "discarded":0
                },

                "responses": {
                    "200":4157,
                    "304":1
                },

                "data": {
                    "received":538200,
                    "sent":177606236
                }
            }
        },
        "caches": {
            "cache_zone": {
                "size":0,
                "cold":false,
                "hit": {
                    "responses":0,
                    "bytes":0
                },

                "stale": {
                    "responses":0,
                    "bytes":0
                },

                "updating": {
                    "responses":0,
                    "bytes":0
                },

                "revalidated": {
                    "responses":0,
                    "bytes":0
                },

                "miss": {
                    "responses":0,
                    "bytes":0,
                    "responses_written":0,
                    "bytes_written":0
                },

                "expired": {
                    "responses":0,
                    "bytes":0,
                    "responses_written":0,
                    "bytes_written":0
                },

                "bypass": {
                    "responses":0,
                    "bytes":0,
                    "responses_written":0,
                    "bytes_written":0
                }
            }
        },

        "limit_conns": {
            "limit_conn_zone": {
                "passed":73,
                "skipped":0,
                "rejected":0,
                "exhausted":0
            }
        },

        "limit_reqs": {
            "limit_req_zone": {
                "passed":54816,
                "skipped":0,
                "delayed":65,
                "rejected":26,
                "exhausted":0
            }
        },

        "upstreams": {
            "upstream": {
                "peers": {
                    "192.168.16.4:80": {
                        "server":"backend.example.com",
                        "service":"_example._tcp",
                        "backup":false,
                        "weight":5,
                        "state":"up",
                        "selected": {
                            "current":2,
                            "total":232
                        },

                        "max_conns":5,
                        "responses": {
                            "200":222,
                            "302":12
                        },

                        "data": {
                            "sent":543866,
                            "received":27349934
                        },

                        "health": {
                            "fails":0,
                            "unavailable":0,
                            "downtime":0
                        },

                        "sid":"<server_id>"
                    }
                },

                "keepalive":2
            }
        }
    },

    "resolvers": {
        "resolver_zone": {
            "queries": {
                "name":442,
                "srv":2,
                "addr":0
            },

            "responses": {
                "success":440,
                "timedout":1,
                "format_error":0,
                "server_failure":1,
                "not_found":1,
                "unimplemented":0,
                "refused":1,
                "other":0
            }
        }
    }
}

Each JSON branch can be requested separately by constructing the request accordingly, e.g.:

$ curl https://www.example.com/status/angie
$ curl https://www.example.com/status/connections
$ curl https://www.example.com/status/slabs
$ curl https://www.example.com/status/slabs/<zone>/slots
$ curl https://www.example.com/status/slabs/<zone>/slots/64
$ curl https://www.example.com/status/http/
$ curl https://www.example.com/status/http/server_zones
$ curl https://www.example.com/status/http/server_zones/<http_server_zone>
$ curl https://www.example.com/status/http/server_zones/<http_server_zone>/ssl

Note

By default, the module uses ISO 8601 strings for date values; to use the integer epoch format instead, add the date=epoch parameter to the query string:

$ curl https://www.example.com/status/angie/load_time

  "2024-04-01T00:59:59+01:00"

$ curl https://www.example.com/status/angie/load_time?date=epoch

  1711929599

Server status#

/status/angie#

{
    "version": "1.5.0",
    "address": "192.168.16.5",
    "generation": 1,
    "load_time": "2024-03-27T16:15:43.805Z"
    "config_files": {
        "/etc/angie/angie.conf": "...",
        "/etc/angie/mime.types": "..."
    }
}

version

String; version of the running Angie PRO web server

build

String; particular build name, if specified during compilation

address

String; the address of the server that accepted the API request

generation

Number; the total number of configuration reloads since the last start

load_time

String or number; time of the last configuration reload, formatted as a date; strings have millisecond resolution

config_files

Object; its members are absolute pathnames of all Angie PRO configuration files that are currently loaded by the server instance, and their values are string representations of the files’ contents, for example:

{
    "/etc/angie/angie.conf": "server {\n  listen 80;\n  # ...\n\n}\n"
}

Caution

The config_files object is available in /status/angie/ only if the api_config_files directive is enabled.

Connections global metrics#

/status/connections#

{
  "accepted": 2257,
  "dropped": 0,
  "active": 3,
  "idle": 1
}

accepted

Number; the total number of accepted client connections

dropped

Number; the total number of dropped client connections

active

Number; the current number of active client connections

idle

Number; the current number of idle client connections

Slab allocator metrics of shared memory zones#

Usage statistics of configured shared memory zones, such as limit_conn, limit_req, and the HTTP cache:

limit_conn_zone $binary_remote_addr zone=limit_conn_zone:10m;
limit_req_zone $binary_remote_addr zone=limit_req_zone:10m rate=1r/s;
proxy_cache cache_zone;

/status/slabs/<zone>#

where <zone> is the name of any configured shared memory zone that uses the slab allocator

{
  "pages": {
    "used": 2,
    "free": 506
  },

  "slots": {
    "64": {
      "used": 1,
      "free": 63,
      "reqs": 1,
      "fails": 0
    }
  }
}

pages

Object; memory pages statistics

    used

Number; the number of currently used memory pages

    free

Number; the number of currently free memory pages

slots

Object; memory slots statistics for each slot size. The slots object contains fields for requested memory slot sizes (e.g. 8, 16, 32, etc., up to half of the page size)

    used

Number; the number of currently used memory slots of specified size

    free

Number; the number of currently free memory slots of specified size

    reqs

Number; the total number of attempts to allocate a memory slot of the specified size

    fails

Number; the number of unsuccessful attempts to allocate a memory slot of the specified size

Resolver#

Statistics collection is enabled by specifying the zone name in the status_zone=<name> parameter of the resolver directive.

resolver 127.0.0.53 status_zone=resolver_zone;
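
With a /status/ location like the one shown above, the zone's counters can then be queried directly (the host name is taken from the earlier examples):

$ curl https://www.example.com/status/resolvers/resolver_zone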

/status/resolvers/<zone>#

where <zone> is the name of any shared memory zone that collects resolver statistics

{
  "queries": {
    "name": 442,
    "srv": 2,
    "addr": 0
  },

  "responses": {
    "success": 440,
    "timedout": 1,
    "format_error": 0,
    "server_failure": 1,
    "not_found": 1,
    "unimplemented": 0,
    "refused": 1,
    "other": 0
  }
}

queries

Object; queries statistics

    name

Number; the number of queries to resolve names to addresses (A and AAAA types queries)

    srv

Number; the number of queries to resolve services to addresses (SRV type queries)

    addr

Number; the number of queries to resolve addresses to names (PTR type queries)

responses

Object; responses statistics

    success

Number; the number of successful responses

    timedout

Number; the number of timed out queries

    format_error

Number; the number of responses with code 1 (Format error)

    server_failure

Number; the number of responses with code 2 (Server failure)

    not_found

Number; the number of responses with code 3 (Name Error)

    unimplemented

Number; the number of responses with code 4 (Not Implemented)

    refused

Number; the number of responses with code 5 (Refused)

    other

Number; the number of queries completed with another non-zero code

The response codes are described in RFC 1035, section 4.1.1.

HTTP server and location#

The status_zone <zone> directive, specified in the server context, enables the collection of statistics in the specified server zone. The ssl object is populated with data when the server's listen directive has the ssl parameter.

server {
    ...
    status_zone http_server_zone;

/status/http/server_zones/<zone>#

where <zone> is the name of any shared memory zone that collects server statistics

"ssl": {
  "handshaked": 4174,
  "reuses": 0,
  "timedout": 0,
  "failed": 0
},

"requests": {
  "total": 4327,
  "processing": 0,
  "discarded": 0
},

"responses": {
  "200": 4305,
  "302": 6,
  "304": 12,
  "404": 4
},

"data": {
  "received": 733955,
  "sent": 59207757
}

ssl

Object; SSL statistics

    handshaked

Number; the total number of successful SSL handshakes

    reuses

Number; the total number of session reuses during SSL handshake

    timedout

Number; the total number of timed out SSL handshakes

    failed

Number; the total number of failed SSL handshakes

requests

Object; requests statistics

    total

Number; the total number of client requests

    processing

Number; the number of client requests currently being processed

    discarded

Number; the total number of client requests completed without sending a response

responses

Object; responses statistics

    <code>

Number; a non-zero number of responses with status <code> (100-599)

    xxx

Number; a non-zero number of responses with other status codes

data

Object; data statistics

    received

Number; the total number of bytes received from clients

    sent

Number; the total number of bytes sent to clients

The status_zone <zone> directive, specified in the location and if in location contexts, enables the collection of statistics in the specified location zone. The special value off disables statistics collection in nested location blocks. The location branch never includes the ssl object or the requests/processing metric.

location / {
    root /usr/share/angie/html;
    status_zone location_zone;
}
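
A minimal sketch of the off value in a nested location (the nested path here is hypothetical):

location / {
    root /usr/share/angie/html;
    status_zone location_zone;

    location /internal/ {
        # requests handled here are not counted in location_zone
        status_zone off;
    }
}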

/status/http/location_zones/<zone>#

{
  "requests": {
    "total": 4158,
    "discarded": 0
  },

  "responses": {
    "200": 4157,
    "304": 1
  },

  "data": {
    "received": 538200,
    "sent": 177606236
  }
}

Stream server#

The status_zone <zone> directive, specified in the server context, enables the collection of statistics in the specified server zone. The ssl object is populated with data when the server's listen directive has the ssl parameter.

server {
    ...
    status_zone stream_server_zone;

/status/stream/server_zones/<zone>#

where <zone> is the name of any shared memory zone that collects server statistics

{
  "ssl": {
    "handshaked": 24,
    "reuses": 0,
    "timedout": 0,
    "failed": 0
  },

  "connections": {
    "total": 24,
    "processing": 1,
    "discarded": 0
  },

  "sessions": {
    "success": 24,
    "invalid": 0,
    "forbidden": 0,
    "internal_error": 0,
    "bad_gateway": 0,
    "service_unavailable": 0
  },

  "data": {
    "received": 2762947,
    "sent": 53495723
  }
}

ssl

Object; SSL statistics

    handshaked

Number; the total number of successful SSL handshakes

    reuses

Number; the total number of session reuses during SSL handshake

    timedout

Number; the total number of timed out SSL handshakes

    failed

Number; the total number of failed SSL handshakes

connections

Object; connections statistics

    total

Number; the total number of client connections

    processing

Number; the number of client connections currently being processed

    discarded

Number; the total number of client connections completed without creating a session

sessions

Object; sessions statistics

    success

Number; the number of sessions completed with code 200, which means successful completion

    invalid

Number; the number of sessions completed with code 400, which happens when client data could not be parsed, e.g. the PROXY protocol header

    forbidden

Number; the number of sessions completed with code 403, when access was forbidden, for example, when access is limited for certain client addresses

    internal_error

Number; the number of sessions completed with code 500, the internal server error

    bad_gateway

Number; the number of sessions completed with code 502, bad gateway, for example, if an upstream server could not be selected or reached

    service_unavailable

Number; the number of sessions completed with code 503, service unavailable, for example, when access is limited by the number of connections

data

Object; data statistics

    received

Number; the total number of bytes received from clients

    sent

Number; the total number of bytes sent to clients

HTTP caches#

proxy_cache cache_zone;
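
The cache zone itself is created by proxy_cache_path, as in the example configuration above; a minimal sketch (path and zone size are illustrative):

proxy_cache_path /var/cache/angie/cache keys_zone=cache_zone:2m;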

/status/http/caches/<cache>#

For each zone configured with proxy_cache, the following data is stored:

{
  "name_zone": {
    "size": 0,
    "cold": false,
    "hit": {
      "responses": 0,
      "bytes": 0
    },

    "stale": {
      "responses": 0,
      "bytes": 0
    },

    "updating": {
      "responses": 0,
      "bytes": 0
    },

    "revalidated": {
      "responses": 0,
      "bytes": 0
    },

    "miss": {
      "responses": 0,
      "bytes": 0,
      "responses_written": 0,
      "bytes_written": 0
    },

    "expired": {
      "responses": 0,
      "bytes": 0,
      "responses_written": 0,
      "bytes_written": 0
    },

    "bypass": {
      "responses": 0,
      "bytes": 0,
      "responses_written": 0,
      "bytes_written": 0
    }
  }
}

size

Number; the current size of the cache

max_size

Number; configured limit on the maximum size of the cache

cold

Boolean; true while the cache loader loads data from disk

hit

Object; statistics of valid cached responses (proxy_cache_valid)

    responses

Number; the total number of responses read from the cache

    bytes

Number; the total number of bytes read from the cache

stale

Object; statistics of expired responses taken from the cache (proxy_cache_use_stale)

    responses

Number; the total number of responses read from the cache

    bytes

Number; the total number of bytes read from the cache

updating

Object; statistics of expired responses taken from the cache while responses were being updated (proxy_cache_use_stale updating)

    responses

Number; the total number of responses read from the cache

    bytes

Number; the total number of bytes read from the cache

revalidated

Object; statistics of expired and revalidated responses taken from the cache (proxy_cache_revalidate)

    responses

Number; the total number of responses read from the cache

    bytes

Number; the total number of bytes read from the cache

miss

Object; statistics of responses not found in the cache

    responses

Number; the total number of corresponding responses

    bytes

Number; the total number of bytes read from the proxied server

    responses_written

Number; the total number of responses written to the cache

    bytes_written

Number; the total number of bytes written to the cache

expired

Object; statistics of expired responses not taken from the cache

    responses

Number; the total number of corresponding responses

    bytes

Number; the total number of bytes read from the proxied server

    responses_written

Number; the total number of responses written to the cache

    bytes_written

Number; the total number of bytes written to the cache

bypass

Object; statistics of responses not looked up in the cache (proxy_cache_bypass)

    responses

Number; the total number of corresponding responses

    bytes

Number; the total number of bytes read from the proxied server

    responses_written

Number; the total number of responses written to the cache

    bytes_written

Number; the total number of bytes written to the cache

Added in version 1.2.0.

If cache sharding is enabled with proxy_cache_path directives, individual shards are exposed as object members of a shards object:

shards

Object; lists individual shards as members

    <shard>

Object; represents an individual shard with its cache path for name

        size

Number; the shard’s current size

        max_size

Number; maximum shard size, if configured

        cold

Boolean; true while the cache loader loads data from disk

{
  "name_zone": {
    "shards": {
        "/path/to/shard1": {
            "size": 0,
            "cold": false
        },

        "/path/to/shard2": {
            "size": 0,
            "cold": false
        }
    }
  }
}
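
A layout like this would correspond to several proxy_cache_path directives sharing one keys_zone (a hypothetical sketch: the paths match the shard names above, while the exact sharding syntax is an assumption):

proxy_cache_path /path/to/shard1 keys_zone=name_zone:2m;
proxy_cache_path /path/to/shard2 keys_zone=name_zone:2m;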

limit_conn#

limit_conn_zone $binary_remote_addr zone=limit_conn_zone:10m;

/status/http/limit_conns/<zone>, /status/stream/limit_conns/<zone>#

An object for each zone configured with limit_conn in the http or stream context, with the following fields:

{
  "passed": 73,
  "skipped": 0,
  "rejected": 0,
  "exhausted": 0
}

passed

Number; the total number of passed connections

skipped

Number; the total number of connections passed with a zero-length key or a key exceeding 255 bytes

rejected

Number; the total number of connections exceeding the configured limit

exhausted

Number; the total number of connections rejected due to exhaustion of zone storage

limit_req#

limit_req_zone $binary_remote_addr zone=limit_req_zone:10m rate=1r/s;

/status/http/limit_reqs/<zone>#

An object for each zone configured with limit_req, with the following fields:

{
  "passed": 54816,
  "skipped": 0,
  "delayed": 65,
  "rejected": 26,
  "exhausted": 0
}

passed

Number; the total number of passed requests

skipped

Number; the total number of requests passed with a zero-length key or a key exceeding 65535 bytes

delayed

Number; the total number of delayed requests

rejected

Number; the total number of rejected requests

exhausted

Number; the total number of requests rejected due to exhaustion of zone storage

HTTP upstream#

Added in version 1.1.0.

To enable collection of the following metrics, set the zone directive in the upstream context, for instance:

upstream upstream {
    zone upstream 256k;
    server backend.example.com service=_example._tcp resolve max_conns=5;
    keepalive 4;
}

/status/http/upstreams/<upstream>#

where <upstream> is the name of any upstream specified with the zone directive

{
    "peers": {
        "192.168.16.4:80": {
            "server": "backend.example.com",
            "service": "_example._tcp",
            "backup": false,
            "weight": 5,
            "state": "up",
            "selected": {
                "current": 2,
                "total": 232
            },

            "max_conns": 5,
            "responses": {
                "200": 222,
                "302": 12
            },

            "data": {
                "sent": 543866,
                "received": 27349934
            },

            "health": {
                "fails": 0,
                "unavailable": 0,
                "downtime": 0
            },

            "sid": "<server_id>"
        }
    },

    "keepalive": 2
}

peers

Object; contains the metrics of the upstream’s peers as subobjects whose names are canonical representations of the peers’ addresses. Members of each subobject:

    server

String; the parameter of the server directive

    service

String; name of the service as specified in the server directive, if configured

    slow_start
    (PRO 1.4.0+)

Number; the specified slow_start value for the server, expressed in seconds.

When setting the value via the respective subsection of the dynamic configuration API, you can specify either a number or a time value with millisecond precision.

    backup

Boolean; true for backup servers

    weight

Number; configured weight

    state

String; current state of the peer:

  • checking (PRO): has an essential health probe configured but hasn’t been checked yet; only probe requests are sent

  • down: disabled manually, no requests are sent

  • draining (PRO): similar to down, but requests from sessions that were earlier bound using sticky are still sent

  • recovering: recovering after failure according to slow_start, more requests are sent gradually

  • unavailable: reached the max_fails limit, a client request is attempted at fail_timeout intervals

  • unhealthy (PRO): not functioning properly, only probe requests are sent

  • up: operational, requests are sent as usual

    selected

Object; peer selection statistics

        current

Number; the current number of connections to the peer

        total

Number; the total number of requests forwarded to the peer

        last

String or number; time when the peer was last selected, formatted as a date

    max_conns

Number; the configured maximum number of simultaneous connections, if specified

    responses

Object; responses statistics

        <code>

Number; a non-zero number of responses with status <code> (100-599)

        xxx

Number; a non-zero number of responses with other status codes

    data

Object; data statistics

        received

Number; the total number of bytes received from the peer

        sent

Number; the total number of bytes sent to the peer

    health

Object; health statistics

        fails

Number; the total number of unsuccessful attempts to communicate with the peer

        unavailable

Number; how many times the peer became unavailable due to reaching the max_fails limit

        downtime

Number; the total time (in milliseconds) when the peer was unavailable for selection

        downstart

String or number; time when the peer became unavailable, formatted as a date

        header_time
        (PRO 1.3.0+)

Number; average time (in milliseconds) to receive the response headers from the peer; see response_time_factor

        response_time
        (PRO 1.3.0+)

Number; average time (in milliseconds) to receive the entire peer response; see response_time_factor

    sid

String; the configured ID of the server in the upstream group

keepalive

Number; the number of currently cached connections

Changed in version 1.2.0.

If the upstream has upstream_probe probes configured, the health object also has a probes subobject that stores the peer’s health probe counters, and the peer’s state can also be checking or unhealthy, in addition to the values listed above:

{
    "192.168.16.4:80": {
        "state": "unhealthy",
        "...": "...",
        "health": {
            "...": "...",
            "probes": {
                "count": 10,
                "fails": 10,
                "last": "2024-03-27T09:56:07Z"
            }
        }
    }
}

The checking value of state isn’t counted as downtime and means that the peer, which has a probe configured as essential, hasn’t been checked yet; the unhealthy value means that the peer is malfunctioning. Both states also imply that the peer isn’t included in load balancing. For details of health probes, see upstream_probe.

Counters in probes:

count

Number; total probes for this peer

fails

Number; total failed probes

last

String or number; last probe time, formatted as a date

queue#

Changed in version 1.4.0.

If a request queue is configured for the upstream, the upstream object also contains a nested queue object, which holds counters for requests in the queue:

{
    "queue": {
        "queued": 20112,
        "waiting": 1011,
        "dropped": 6031,
        "timedout": 560,
        "overflows": 13
    }
}

The counter values are aggregated across all worker processes:

queued

Number; total count of requests that entered the queue

waiting

Number; current count of requests in the queue

dropped

Number; total count of requests removed from the queue due to the client prematurely closing the connection

timedout

Number; total count of requests removed from the queue due to timeout

overflows

Number; total count of queue overflow occurrences
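
Such a queue is enabled with a queue directive in the upstream context of Angie PRO; a hypothetical sketch (the queue length and timeout parameters shown here are assumptions):

upstream backend {
    zone backend 256k;
    server backend.example.com;
    queue 100 timeout=30s;
}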

Stream upstream#

To enable collection of the following metrics, set the zone directive in the upstream context, for instance:

upstream upstream {
    zone upstream 256k;
    server backend.example.com service=_example._tcp resolve max_conns=5;
    keepalive 4;
}

/status/stream/upstreams/<upstream>#

Here, <upstream> is the name of an upstream that is configured with a zone directive.

{
    "peers": {
        "192.168.16.4:1935": {
            "server": "backend.example.com",
            "service": "_example._tcp",
            "backup": false,
            "weight": 5,
            "state": "up",
            "selected": {
                "current": 2,
                "total": 232
            },

            "max_conns": 5,
            "data": {
                "sent": 543866,
                "received": 27349934
            },

            "health": {
                "fails": 0,
                "unavailable": 0,
                "downtime": 0
            }
        }
    }
}

peers

Object; contains the metrics of the upstream’s peers as subobjects whose names are canonical representations of the peers’ addresses. Members of each subobject:

    server

String; address set by the server directive

    service

String; service name, if set by server directive

    slow_start
    (PRO 1.4.0+)

Number; the specified slow_start value for the server, expressed in seconds.

When setting the value via the respective subsection of the dynamic configuration API, you can specify either a number or a time value with millisecond precision.

    backup

Boolean; true for backup server

    weight

Number; the weight of the peer

    state

String; current state of the peer:

  • up: operational, requests are sent as usual

  • down: disabled manually, no requests are sent

  • unavailable: reached the max_fails limit, a client request is attempted at fail_timeout intervals

  • recovering: recovering after failure according to slow_start, more requests are sent gradually

  • checking (PRO): has an essential health probe configured but hasn’t been checked yet; only probe requests are sent

  • unhealthy (PRO): not functioning properly, only probe requests are sent

    selected

Object; the peer’s selection metrics

        current

Number; current connections to the peer

        total

Number; total connections forwarded to the peer

        last

String or number; time when the peer was last selected, formatted as a date

    max_conns

Number; maximum number of simultaneous active connections to the peer, if set

    data

Object; data transfer metrics

        received

Number; total bytes received from the peer

        sent

Number; total bytes sent to the peer

    health

Object; peer health metrics

        fails

Number; total failed attempts to reach the peer

        unavailable

Number; how many times the peer became unavailable due to reaching the max_fails limit

        downtime

Number; total time (in milliseconds) that the peer was unavailable for selection

        downstart

String or number; time when the peer last became unavailable, formatted as a date

        connect_time
        (PRO 1.4.0+)

Number; average time (in milliseconds) taken to establish a connection with the peer; see the response_time_factor directive.

        first_byte_time
        (PRO 1.4.0+)

Number; average time (in milliseconds) to receive the first byte of the response from the peer; see the response_time_factor directive.

        last_byte_time
        (PRO 1.4.0+)

Number; average time (in milliseconds) to receive the complete response from the peer; see the response_time_factor directive.

Changed in version 1.4.0.

If the upstream has upstream_probe probes configured, the health object also has a probes subobject that stores the peer’s health probe counters, and the peer’s state can also be checking or unhealthy, in addition to the values listed above:

{
    "192.168.16.4:80": {
        "state": "unhealthy",
        "...": "...",
        "health": {
            "...": "...",
            "probes": {
                "count": 2,
                "fails": 2,
                "last": "2024-03-27T11:03:54Z"
            }
        }
    }
}

The checking value of state isn’t counted as downtime and means that the peer, which has a probe configured as essential, hasn’t been checked yet; the unhealthy value means that the peer is malfunctioning. Both states also imply that the peer isn’t included in load balancing. For details of health probes, see upstream_probe.

Counters in probes:

count

Number; total probes for this peer

fails

Number; total failed probes

last

String or number; last probe time, formatted as a date

Prometheus Format#

Added in version 1.1.0.

Deprecated since version 1.4.0.

A Prometheus-readable response can be obtained by adding the format=prometheus parameter to the query string:

$ curl https://www.example.com/status/angie?format=prometheus
generation 1
$ curl http://www.example.com/status/http/server_zones/<http_server_zone>?format=prometheus
ssl_handshaked 21
ssl_reuses 0
ssl_timedout 0
ssl_failed 0
requests_total 46
requests_processing 1
requests_discarded 0
responses_200 44
responses_404 1
data_received 8634
data_sent 584725

See also: the http_prometheus module.

Dynamic Configuration API (PRO only)#

Added in version 1.2.0.

The API includes a /config section that enables dynamic updates to Angie PRO’s configuration in JSON with PUT, PATCH, and DELETE HTTP requests. All updates are atomic; new settings are applied as a whole, or none are applied at all. On error, Angie PRO reports the reason.

Subsections of /config#

Currently, the /config section enables configuring individual servers within upstreams of the HTTP and stream modules; the number of settings eligible for dynamic configuration is steadily increasing.

/config/http/upstreams/<upstream>/servers/<name>#

Enables configuring individual upstream peers, including deleting existing peers or adding new ones.

URI path parameters:

<upstream>

Name of the upstream; to be configurable via /config, it must have a zone directive configured, defining a shared memory zone.

<name>

The peer’s name within the upstream, defined as <service>@<host>, where:

  • <service>@ is an optional service name, used for SRV record resolution.

  • <host> is the domain name of the service (if resolve is present) or its IP; an optional port can be defined here.

For example, the following configuration:

upstream backend {
    server backend.example.com:8080 service=_http._tcp resolve;
    server 127.0.0.1;
    zone backend 1m;
}

This configuration allows the following peer names:

$ curl http://127.0.0.1/config/http/upstreams/backend/servers/_http._tcp@backend.example.com:8080/
$ curl http://127.0.0.1/config/http/upstreams/backend/servers/127.0.0.1/

This API subsection enables setting the weight, max_conns, max_fails, fail_timeout, backup, down and sid parameters, as described in server:

Note

There is no separate drain option here; to enable drain, set down to the string value drain:

$ curl -X PUT -d "drain" \
  http://127.0.0.1/config/http/upstreams/backend/servers/backend.example.com/down

Example:

$ curl http://127.0.0.1/config/http/upstreams/backend/servers/backend.example.com?defaults=on
{
    "weight": 1,
    "max_conns": 0,
    "max_fails": 1,
    "fail_timeout": 10,
    "backup": true,
    "down": false,
    "sid": ""
}

The parameters actually available are limited to those supported by the upstream’s current load balancing method. For example, if the upstream is configured with random:

upstream backend {
    zone backend 256k;
    server backend.example.com resolve max_conns=5;
    random;
}

You will be unable to add a new peer that defines backup:

$ curl -X PUT -d '{ "backup": true }' \
    http://127.0.0.1/config/http/upstreams/backend/servers/backend1.example.com
{
    "error": "FormatError",
    "description": "The \"backup\" field is unknown."
}

Note

Even with a compatible load balancing method, the backup parameter can only be set at new peer creation.

/config/stream/upstreams/<upstream>/servers/<name>#

Enables configuring individual upstream peers, including deleting existing peers or adding new ones.

URI path parameters:

<upstream>

Name of the upstream; to be configurable via /config, it must have a zone directive configured, defining a shared memory zone.

<name>

The peer’s name within the upstream, defined as <service>@<host>, where:

  • <service>@ is an optional service name, used for SRV record resolution.

  • <host> is the domain name of the service (if resolve is present) or its IP; an optional port can be defined here.

For example, the following configuration:

upstream backend {
    server backend.example.com:8080 service=_example._tcp resolve;
    server 127.0.0.1:12345;
    zone backend 1m;
}

This configuration allows the following peer names:

$ curl http://127.0.0.1/config/stream/upstreams/backend/servers/_example._tcp@backend.example.com:8080/
$ curl http://127.0.0.1/config/stream/upstreams/backend/servers/127.0.0.1:12345/

This API subsection enables setting the weight, max_conns, max_fails, fail_timeout, backup and down parameters, as described in server:

$ curl http://127.0.0.1/config/stream/upstreams/backend/servers/backend.example.com?defaults=on
{
    "weight": 1,
    "max_conns": 0,
    "max_fails": 1,
    "fail_timeout": 10,
    "backup": true,
    "down": false,
}

The parameters actually available are limited to those supported by the upstream’s current load balancing method. For example, if the upstream is configured with random:

upstream backend {
    zone backend 256k;
    server backend.example.com resolve max_conns=5;
    random;
}

You will be unable to add a new peer that defines backup:

$ curl -X PUT -d '{ "backup": true }' \
    http://127.0.0.1/config/stream/upstreams/backend/servers/backend1.example.com
{
    "error": "FormatError",
    "description": "The \"backup\" field is unknown."
}

Note

Even with a compatible load balancing method, the backup parameter can only be set at new peer creation.

HTTP Methods#

Let’s consider the semantics of all HTTP methods applicable to this section, given this upstream configuration:

http {
    # ...

    upstream backend {
        zone upstream 256k;
        server backend.example.com resolve max_conns=5;
        # ...
    }

    server {
        # ...

        location /config/ {
            api /config/;

            allow 127.0.0.1;
            deny all;
        }
    }
}

GET#

The GET HTTP method queries an entity at any existing path within /config, just as it does for other API sections.

For example, the /config/http/upstreams/backend/servers/ upstream server branch enables these queries:

$ curl http://127.0.0.1/config/http/upstreams/backend/servers/backend.example.com/max_conns
$ curl http://127.0.0.1/config/http/upstreams/backend/servers/backend.example.com
$ curl http://127.0.0.1/config/http/upstreams/backend/servers
$ # ...
$ curl http://127.0.0.1/config

You can obtain default parameter values with defaults=on:

$ curl http://127.0.0.1/config/http/upstreams/backend/servers?defaults=on
{
    "backend.example.com": {
        "weight": 1,
        "max_conns": 5,
        "max_fails": 1,
        "fail_timeout": 10,
        "backup": false,
        "down": false,
        "sid": ""
    }
}

PUT#

The PUT HTTP method creates a new JSON entity at the specified path or entirely replaces an existing one.

For example, to set the max_fails parameter, not specified earlier, of the backend.example.com server within the backend upstream:

$ curl -X PUT -d '2' \
    http://127.0.0.1/config/http/upstreams/backend/servers/backend.example.com/max_fails
{
    "success": "Updated",
    "description": "Existing configuration API entity \"/config/http/upstreams/backend/servers/backend.example.com/max_fails\" was updated with replacing."
}

Verify the changes:

$ curl http://127.0.0.1/config/http/upstreams/backend/servers/backend.example.com
{
    "max_conns": 5,
    "max_fails": 2
}
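
Because PUT can also create entities, a whole new peer may be added by supplying a complete server object at a previously non-existent path (backend2.example.com and its parameter values are hypothetical):

$ curl -X PUT -d '{ "weight": 2, "max_conns": 10 }' \
    http://127.0.0.1/config/http/upstreams/backend/servers/backend2.example.com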

DELETE#

The DELETE HTTP method deletes previously defined settings at the specified path; in doing so, it reverts them to the default values, if any exist.

For example, to delete the previously set max_fails parameter of the backend.example.com server within the backend upstream:

$ curl -X DELETE \
    http://127.0.0.1/config/http/upstreams/backend/servers/backend.example.com/max_fails
{
    "success": "Reset",
    "description": "Configuration API entity \"/config/http/upstreams/backend/servers/backend.example.com/max_fails\" was reset to default."
}

Verify the changes using defaults=on:

$ curl http://127.0.0.1/config/http/upstreams/backend/servers/backend.example.com?defaults=on
{
    "weight": 1,
    "max_conns": 5,
    "max_fails": 1,
    "fail_timeout": 10,
    "backup": false,
    "down": false,
    "sid": ""
}

The max_fails setting is back to its default value.
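
DELETE issued at a peer's own path removes the entire peer from the upstream (backend2.example.com is a hypothetical name):

$ curl -X DELETE \
    http://127.0.0.1/config/http/upstreams/backend/servers/backend2.example.com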

PATCH#

The PATCH HTTP method creates a new entity at the specified path or partially replaces or complements an existing one (RFC 7386) by supplying a JSON definition in its payload.

The method operates as follows: if the entities from the new definition exist in the configuration, they are overwritten; otherwise, they are added.

For example, to change the down setting of the backend.example.com server within the backend upstream, leaving the rest intact:

$ curl -X PATCH -d '{ "down": true }' \
    http://127.0.0.1/config/http/upstreams/backend/servers/backend.example.com
{
    "success": "Updated",
    "description": "Existing configuration API entity \"/config/http/upstreams/backend/servers/backend.example.com\" was updated with merging."
}

Verify the changes:

$ curl http://127.0.0.1/config/http/upstreams/backend/servers/backend.example.com
{
    "max_conns": 5,
    "down": true
}

The JSON object supplied with the PATCH request was merged with the existing one instead of overwriting it, as would be the case with PUT.

Null values are a special case; they are used to delete specific configuration items during such a merge.

Note

This deletion is identical to DELETE; in particular, it reinstates the default values.

For example, to delete the down setting added earlier and simultaneously update max_conns:

$ curl -X PATCH -d '{ "down": null, "max_conns": 6 }' \
    http://127.0.0.1/config/http/upstreams/backend/servers/backend.example.com
{
    "success": "Updated",
    "description": "Existing configuration API entity \"/config/http/upstreams/backend/servers/backend.example.com\" was updated with merging."
}

Verify the changes:

$ curl http://127.0.0.1/config/http/upstreams/backend/servers/backend.example.com
{
    "max_conns": 6
}

The down parameter, for which a null was supplied, was deleted; max_conns was updated.