Improving the Performance on the UMH Historian (Grafana / TimescaleDB) in 0.14.1

Optimize your Grafana and TimescaleDB performance with the UMH Historian in version 0.14.1. This article recounts our query-optimization journey, detailing the challenges and solutions that significantly sped up dashboard loading times for our users.

"Loading our dashboards takes ages!"

This complaint came up more and more often from our community members and customers.

We went on an optimization journey with the UMH Historian, specifically focusing on improving query performance within our Grafana and TimescaleDB setup. This article outlines the challenges we faced, the solutions we implemented, and how you can benefit by upgrading to the latest version or applying these improvements yourself.

The Original Problem: Very Long Query Times

Community and customer feedback pointed out severe lag in loading dashboards. Our initial assessments ruled out system performance issues, pinpointing the problem to inefficient SQL queries. An example of such a query from the Management Console's Tag Browser is shown below:

SELECT name, avg(value) as value, time_bucket('2h', timestamp) AS time
FROM tag
WHERE asset_id = get_asset_id(
	'many',
	'data',
	'points',
	'2'
)
AND name = 'value'
AND timestamp BETWEEN '2024-03-05T00:00:00.000Z' AND '2024-06-03T00:00:00.000Z'
GROUP BY time, name
ORDER BY time DESC;

Identifying the Root Cause: The get_asset_id Function

We utilized EXPLAIN ANALYZE on a test system to diagnose the inefficiencies:

GroupAggregate  (cost=115466.66..122097.76 rows=248318 width=22) (actual time=4192.483..4416.408 rows=1079 loops=1)
  Group Key: (time_bucket('02:00:00'::interval, tag."timestamp")), tag.name
  ->  Sort  (cost=115466.66..116193.24 rows=290633 width=18) (actual time=4192.363..4266.927 rows=302484 loops=1)
        Sort Key: (time_bucket('02:00:00'::interval, tag."timestamp")) DESC
        Sort Method: external merge  Disk: 8896kB
        ->  Custom Scan (ChunkAppend) on tag  (cost=0.00..85602.25 rows=290633 width=18) (actual time=304.909..3944.526 rows=302484 loops=1)
              Chunks excluded during startup: 0
              ->  Seq Scan on _hyper_1_13_chunk  (cost=0.00..3637.51 rows=13113 width=18) (actual time=304.903..447.338 rows=13113 loops=1)
                    Filter: (("timestamp" >= '2024-03-05 00:00:00+00'::timestamp with time zone) AND ("timestamp" <= '2024-06-03 00:00:00+00'::timestamp with time zone) AND (name = 'value'::text) AND (asset_id = get_asset_id('many'::text, 'data'::text, 'points'::text, '2'::text, ''::text, ''::text)))
              ->  Index Scan using "1_2_tag_name_asset_id_timestamp_key" on _hyper_1_1_chunk  (cost=0.42..7490.60 rows=11843 width=30) (actual time=0.101..244.903 rows=23690 loops=1)
                    Index Cond: ((name = 'value'::text) AND ("timestamp" >= '2024-03-05 00:00:00+00'::timestamp with time zone) AND ("timestamp" <= '2024-06-03 00:00:00+00'::timestamp with time zone))
                    Filter: (asset_id = get_asset_id('many'::text, 'data'::text, 'points'::text, '2'::text, ''::text, ''::text))
              ->  Seq Scan on _hyper_1_12_chunk  (cost=0.00..6543.84 rows=23592 width=18) (actual time=0.055..240.542 rows=23592 loops=1)
                    Filter: (("timestamp" >= '2024-03-05 00:00:00+00'::timestamp with time zone) AND ("timestamp" <= '2024-06-03 00:00:00+00'::timestamp with time zone) AND (name = 'value'::text) AND (asset_id = get_asset_id('many'::text, 'data'::text, 'points'::text, '2'::text, ''::text, ''::text)))
              ->  Seq Scan on _hyper_1_11_chunk  (cost=0.00..6480.01 rows=23363 width=18) (actual time=0.055..227.191 rows=23363 loops=1)
                    Filter: (("timestamp" >= '2024-03-05 00:00:00+00'::timestamp with time zone) AND ("timestamp" <= '2024-06-03 00:00:00+00'::timestamp with time zone) AND (name = 'value'::text) AND (asset_id = get_asset_id('many'::text, 'data'::text, 'points'::text, '2'::text, ''::text, ''::text)))
              ->  Seq Scan on _hyper_1_6_chunk  (cost=0.00..6540.60 rows=23580 width=18) (actual time=0.054..520.408 rows=23580 loops=1)
                    Filter: (("timestamp" >= '2024-03-05 00:00:00+00'::timestamp with time zone) AND ("timestamp" <= '2024-06-03 00:00:00+00'::timestamp with time zone) AND (name = 'value'::text) AND (asset_id = get_asset_id('many'::text, 'data'::text, 'points'::text, '2'::text, ''::text, ''::text)))
              ->  Seq Scan on _hyper_1_3_chunk  (cost=0.00..6556.26 rows=23638 width=18) (actual time=0.056..324.077 rows=23638 loops=1)
                    Filter: (("timestamp" >= '2024-03-05 00:00:00+00'::timestamp with time zone) AND ("timestamp" <= '2024-06-03 00:00:00+00'::timestamp with time zone) AND (name = 'value'::text) AND (asset_id = get_asset_id('many'::text, 'data'::text, 'points'::text, '2'::text, ''::text, ''::text)))
              ->  Seq Scan on _hyper_1_7_chunk  (cost=0.00..6592.36 rows=23768 width=18) (actual time=0.045..264.391 rows=23768 loops=1)
                    Filter: (("timestamp" >= '2024-03-05 00:00:00+00'::timestamp with time zone) AND ("timestamp" <= '2024-06-03 00:00:00+00'::timestamp with time zone) AND (name = 'value'::text) AND (asset_id = get_asset_id('many'::text, 'data'::text, 'points'::text, '2'::text, ''::text, ''::text)))
              ->  Seq Scan on _hyper_1_4_chunk  (cost=0.00..6537.09 rows=23567 width=18) (actual time=0.043..268.158 rows=23567 loops=1)
                    Filter: (("timestamp" >= '2024-03-05 00:00:00+00'::timestamp with time zone) AND ("timestamp" <= '2024-06-03 00:00:00+00'::timestamp with time zone) AND (name = 'value'::text) AND (asset_id = get_asset_id('many'::text, 'data'::text, 'points'::text, '2'::text, ''::text, ''::text)))
              ->  Seq Scan on _hyper_1_14_chunk  (cost=0.00..6515.30 rows=23490 width=18) (actual time=0.046..181.634 rows=23490 loops=1)
                    Filter: (("timestamp" >= '2024-03-05 00:00:00+00'::timestamp with time zone) AND ("timestamp" <= '2024-06-03 00:00:00+00'::timestamp with time zone) AND (name = 'value'::text) AND (asset_id = get_asset_id('many'::text, 'data'::text, 'points'::text, '2'::text, ''::text, ''::text)))
              ->  Seq Scan on _hyper_1_10_chunk  (cost=0.00..6517.73 rows=23499 width=18) (actual time=0.042..196.960 rows=23499 loops=1)
                    Filter: (("timestamp" >= '2024-03-05 00:00:00+00'::timestamp with time zone) AND ("timestamp" <= '2024-06-03 00:00:00+00'::timestamp with time zone) AND (name = 'value'::text) AND (asset_id = get_asset_id('many'::text, 'data'::text, 'points'::text, '2'::text, ''::text, ''::text)))
              ->  Seq Scan on _hyper_1_9_chunk  (cost=0.00..6537.36 rows=23568 width=18) (actual time=0.041..350.425 rows=23568 loops=1)
                    Filter: (("timestamp" >= '2024-03-05 00:00:00+00'::timestamp with time zone) AND ("timestamp" <= '2024-06-03 00:00:00+00'::timestamp with time zone) AND (name = 'value'::text) AND (asset_id = get_asset_id('many'::text, 'data'::text, 'points'::text, '2'::text, ''::text, ''::text)))
              ->  Seq Scan on _hyper_1_8_chunk  (cost=0.00..6523.94 rows=23522 width=18) (actual time=0.041..222.187 rows=23522 loops=1)
                    Filter: (("timestamp" >= '2024-03-05 00:00:00+00'::timestamp with time zone) AND ("timestamp" <= '2024-06-03 00:00:00+00'::timestamp with time zone) AND (name = 'value'::text) AND (asset_id = get_asset_id('many'::text, 'data'::text, 'points'::text, '2'::text, ''::text, ''::text)))
              ->  Seq Scan on _hyper_1_5_chunk  (cost=0.00..6494.51 rows=23413 width=18) (actual time=0.056..279.690 rows=23413 loops=1)
                    Filter: (("timestamp" >= '2024-03-05 00:00:00+00'::timestamp with time zone) AND ("timestamp" <= '2024-06-03 00:00:00+00'::timestamp with time zone) AND (name = 'value'::text) AND (asset_id = get_asset_id('many'::text, 'data'::text, 'points'::text, '2'::text, ''::text, ''::text)))
              ->  Index Scan using _hyper_1_15_chunk_tag_timestamp_idx on _hyper_1_15_chunk  (cost=0.29..1908.56 rows=6677 width=18) (actual time=0.103..73.246 rows=6681 loops=1)
                    Index Cond: (("timestamp" >= '2024-03-05 00:00:00+00'::timestamp with time zone) AND ("timestamp" <= '2024-06-03 00:00:00+00'::timestamp with time zone))
                    Filter: ((name = 'value'::text) AND (asset_id = get_asset_id('many'::text, 'data'::text, 'points'::text, '2'::text, ''::text, ''::text)))
Planning Time: 2.881 ms
JIT:
  Functions: 67
  Options: Inlining false, Optimization false, Expressions true, Deforming true
  Timing: Generation 4.067 ms, Inlining 0.000 ms, Optimization 7.396 ms, Emission 297.413 ms, Total 308.876 ms
Execution Time: 5225.781 ms

An insightful tip led us to discover that our get_asset_id function was not cached, causing significant performance overhead as it was re-evaluated for each database row.

Adjusting Function Volatility of get_asset_id

Our current (0.14.0) implementation, written in PL/pgSQL, is shown below:

    CREATE OR REPLACE FUNCTION get_asset_id(
        _enterprise text,
        _site text DEFAULT '',
        _area text DEFAULT '',
        _line text DEFAULT '',
        _workcell text DEFAULT '',
        _origin_id text DEFAULT ''
    )
    RETURNS integer AS '
    DECLARE
        result_id integer;
    BEGIN
        SELECT id INTO result_id FROM asset
        WHERE enterprise = _enterprise
        AND site = _site
        AND area = _area
        AND line = _line
        AND workcell = _workcell
        AND origin_id = _origin_id
        LIMIT 1; -- Ensure only one id is returned

        RETURN result_id;
    END;
    ' LANGUAGE plpgsql;

Aside from the unusual choice of single quotes instead of $$ for the inner function definition, which is dictated by the processing requirements of Helm, nothing initially seemed amiss.

I revisited the excellent PostgreSQL documentation on creating functions and also consulted GPT-4 to see if it detected anything unusual. Both GPT-4 and the documentation confirmed that a function is marked as VOLATILE by default. This means that PostgreSQL will re-evaluate this function for every row it encounters in the tag table. Considering that our table contains hundreds of thousands of entries, this function would perform an SQL query to retrieve the same ID from the asset table every time, even though the asset table is typically static and only grows.
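You can check how your functions are currently classified straight from the system catalog; the provolatile column in pg_proc holds the volatility marker:

```sql
-- 'v' = VOLATILE (the default), 's' = STABLE, 'i' = IMMUTABLE
SELECT proname, provolatile
FROM pg_proc
WHERE proname LIKE 'get_asset_id%';
```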

To address this, we introduced new versions of the function:

  • get_asset_id_stable (using the STABLE keyword)
  • get_asset_id_immutable (using the IMMUTABLE keyword)
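The new variants keep the original function body and only append the volatility keyword; a sketch of the IMMUTABLE version (we assume here that the body is otherwise unchanged from 0.14.0):

```sql
CREATE OR REPLACE FUNCTION get_asset_id_immutable(
    _enterprise text,
    _site text DEFAULT '',
    _area text DEFAULT '',
    _line text DEFAULT '',
    _workcell text DEFAULT '',
    _origin_id text DEFAULT ''
)
RETURNS integer AS '
DECLARE
    result_id integer;
BEGIN
    SELECT id INTO result_id FROM asset
    WHERE enterprise = _enterprise
      AND site = _site
      AND area = _area
      AND line = _line
      AND workcell = _workcell
      AND origin_id = _origin_id
    LIMIT 1; -- ensure only one id is returned

    RETURN result_id;
END;
' LANGUAGE plpgsql IMMUTABLE; -- the only functional change vs. 0.14.0
```

The STABLE variant is identical except that STABLE replaces IMMUTABLE on the last line.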

Switching to the IMMUTABLE function version yielded a dramatic improvement:

GroupAggregate  (cost=0.29..17159.48 rows=241068 width=22) (actual time=0.168..287.419 rows=1079 loops=1)
  Group Key: (time_bucket('02:00:00'::interval, tag."timestamp")), tag.name
  ->  Custom Scan (ChunkAppend) on tag  (cost=0.29..11427.35 rows=282148 width=18) (actual time=0.036..241.234 rows=302484 loops=1)
        Order: time_bucket('02:00:00'::interval, tag."timestamp") DESC
        ->  Index Scan using _hyper_1_13_chunk_tag_timestamp_idx on _hyper_1_13_chunk  (cost=0.29..467.01 rows=13113 width=18) (actual time=0.035..22.843 rows=13113 loops=1)
              Index Cond: (("timestamp" >= '2024-03-05 00:00:00+00'::timestamp with time zone) AND ("timestamp" <= '2024-06-03 00:00:00+00'::timestamp with time zone))
              Filter: ((asset_id = 6) AND (name = 'value'::text))
        ->  Index Scan Backward using "1_2_tag_name_asset_id_timestamp_key" on _hyper_1_1_chunk  (cost=0.42..818.90 rows=3358 width=30) (actual time=1.088..37.789 rows=23690 loops=1)
              Index Cond: ((name = 'value'::text) AND (asset_id = 6) AND ("timestamp" >= '2024-03-05 00:00:00+00'::timestamp with time zone) AND ("timestamp" <= '2024-06-03 00:00:00+00'::timestamp with time zone))
        ->  Index Scan using _hyper_1_12_chunk_tag_timestamp_idx on _hyper_1_12_chunk  (cost=0.29..837.89 rows=23592 width=18) (actual time=0.121..8.603 rows=23592 loops=1)
              Index Cond: (("timestamp" >= '2024-03-05 00:00:00+00'::timestamp with time zone) AND ("timestamp" <= '2024-06-03 00:00:00+00'::timestamp with time zone))
              Filter: ((asset_id = 6) AND (name = 'value'::text))
        ->  Index Scan using _hyper_1_11_chunk_tag_timestamp_idx on _hyper_1_11_chunk  (cost=0.29..829.06 rows=23363 width=18) (actual time=0.024..9.838 rows=23363 loops=1)
              Index Cond: (("timestamp" >= '2024-03-05 00:00:00+00'::timestamp with time zone) AND ("timestamp" <= '2024-06-03 00:00:00+00'::timestamp with time zone))
              Filter: ((asset_id = 6) AND (name = 'value'::text))
        ->  Index Scan using _hyper_1_6_chunk_tag_timestamp_idx on _hyper_1_6_chunk  (cost=0.29..837.59 rows=23580 width=18) (actual time=0.027..7.697 rows=23580 loops=1)
              Index Cond: (("timestamp" >= '2024-03-05 00:00:00+00'::timestamp with time zone) AND ("timestamp" <= '2024-06-03 00:00:00+00'::timestamp with time zone))
              Filter: ((asset_id = 6) AND (name = 'value'::text))
        ->  Index Scan using _hyper_1_3_chunk_tag_timestamp_idx on _hyper_1_3_chunk  (cost=0.29..839.04 rows=23638 width=18) (actual time=0.026..11.821 rows=23638 loops=1)
              Index Cond: (("timestamp" >= '2024-03-05 00:00:00+00'::timestamp with time zone) AND ("timestamp" <= '2024-06-03 00:00:00+00'::timestamp with time zone))
              Filter: ((asset_id = 6) AND (name = 'value'::text))
        ->  Index Scan using _hyper_1_7_chunk_tag_timestamp_idx on _hyper_1_7_chunk  (cost=0.29..843.29 rows=23768 width=18) (actual time=0.029..12.717 rows=23768 loops=1)
              Index Cond: (("timestamp" >= '2024-03-05 00:00:00+00'::timestamp with time zone) AND ("timestamp" <= '2024-06-03 00:00:00+00'::timestamp with time zone))
              Filter: ((asset_id = 6) AND (name = 'value'::text))
        ->  Index Scan using _hyper_1_4_chunk_tag_timestamp_idx on _hyper_1_4_chunk  (cost=0.29..837.26 rows=23567 width=18) (actual time=0.028..7.129 rows=23567 loops=1)
              Index Cond: (("timestamp" >= '2024-03-05 00:00:00+00'::timestamp with time zone) AND ("timestamp" <= '2024-06-03 00:00:00+00'::timestamp with time zone))
              Filter: ((asset_id = 6) AND (name = 'value'::text))
        ->  Index Scan using _hyper_1_14_chunk_tag_timestamp_idx on _hyper_1_14_chunk  (cost=0.29..834.34 rows=23490 width=18) (actual time=0.028..7.161 rows=23490 loops=1)
              Index Cond: (("timestamp" >= '2024-03-05 00:00:00+00'::timestamp with time zone) AND ("timestamp" <= '2024-06-03 00:00:00+00'::timestamp with time zone))
              Filter: ((asset_id = 6) AND (name = 'value'::text))
        ->  Index Scan using _hyper_1_10_chunk_tag_timestamp_idx on _hyper_1_10_chunk  (cost=0.29..834.56 rows=23499 width=18) (actual time=0.024..7.228 rows=23499 loops=1)
              Index Cond: (("timestamp" >= '2024-03-05 00:00:00+00'::timestamp with time zone) AND ("timestamp" <= '2024-06-03 00:00:00+00'::timestamp with time zone))
              Filter: ((asset_id = 6) AND (name = 'value'::text))
        ->  Index Scan using _hyper_1_9_chunk_tag_timestamp_idx on _hyper_1_9_chunk  (cost=0.29..837.29 rows=23568 width=18) (actual time=0.025..15.615 rows=23568 loops=1)
              Index Cond: (("timestamp" >= '2024-03-05 00:00:00+00'::timestamp with time zone) AND ("timestamp" <= '2024-06-03 00:00:00+00'::timestamp with time zone))
              Filter: ((asset_id = 6) AND (name = 'value'::text))
        ->  Index Scan using _hyper_1_8_chunk_tag_timestamp_idx on _hyper_1_8_chunk  (cost=0.29..835.14 rows=23522 width=18) (actual time=0.026..16.498 rows=23522 loops=1)
              Index Cond: (("timestamp" >= '2024-03-05 00:00:00+00'::timestamp with time zone) AND ("timestamp" <= '2024-06-03 00:00:00+00'::timestamp with time zone))
              Filter: ((asset_id = 6) AND (name = 'value'::text))
        ->  Index Scan using _hyper_1_5_chunk_tag_timestamp_idx on _hyper_1_5_chunk  (cost=0.29..831.31 rows=23413 width=18) (actual time=0.026..23.030 rows=23413 loops=1)
              Index Cond: (("timestamp" >= '2024-03-05 00:00:00+00'::timestamp with time zone) AND ("timestamp" <= '2024-06-03 00:00:00+00'::timestamp with time zone))
              Filter: ((asset_id = 6) AND (name = 'value'::text))
        ->  Index Scan using _hyper_1_15_chunk_tag_timestamp_idx on _hyper_1_15_chunk  (cost=0.29..239.31 rows=6677 width=18) (actual time=0.026..4.365 rows=6681 loops=1)
              Index Cond: (("timestamp" >= '2024-03-05 00:00:00+00'::timestamp with time zone) AND ("timestamp" <= '2024-06-03 00:00:00+00'::timestamp with time zone))
              Filter: ((asset_id = 6) AND (name = 'value'::text))
Planning Time: 1.669 ms
Execution Time: 287.576 ms

The execution time dropped notably: the function is no longer re-executed for each row, and the filter is now simply ((asset_id = 6) AND (name = 'value'::text)) instead of a call to the whole get_asset_id function.

This version assumes no modifications to the asset table entries.

If modifications are likely, the STABLE version (get_asset_id_stable) is recommended as it performs similarly by evaluating only once per index scan.

Further Optimization: Continuous Aggregates

While the aforementioned improvements are significant for most use cases, further optimization is possible by allowing the database to "pre-compute" results for specific queries.

In general, this can be achieved using materialized views in PostgreSQL. However, TimescaleDB offers us a more comfortable version suited for frequently changing data. A materialized view creates a new table that holds the results of a previously executed query, while a continuous aggregate automatically updates this table in the background as new data comes in.

The get_asset_id modifications are flexible and speed up every query; pre-computation, in contrast, requires us to evaluate what we actually query frequently.

In our case, we observed that for most Grafana dashboards, an average over 5-minute time buckets sufficed. (Although this can be adjusted for any bucket size, and you can even stack continuous aggregates on top of each other. Note: This feature is not yet available with UMH, as we use TimescaleDB 2.8)

Space considerations

The initial setup of the materialized view will necessitate an increase in the temp_file_limit (details below). A very safe upper limit for a modern HDD or better is approximately the same size as the data itself. Once the initial computation is complete, this limit can be reduced again (our current version of TimescaleDB defaults to 1GB).

The materialized view itself consumes only a minimal amount of data, as it saves only one data point per time bucket per asset_id/name combination.
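If you want to verify the space usage on your own system, TimescaleDB's hypertable_size can compare the raw hypertable with the aggregate's backing hypertable (the internal name, _materialized_hypertable_6 in our query plans, differs per installation):

```sql
-- on-disk size of the raw data vs. the 5m aggregate
SELECT pg_size_pretty(hypertable_size('tag')) AS raw_data,
       pg_size_pretty(hypertable_size('_timescaledb_internal._materialized_hypertable_6')) AS aggregate_5m;
```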

Creating the continuous aggregate

The following query creates a new materialized view without any data. The exact query might vary based on your application requirements. In our case, we generate averages over 5-minute time buckets:

💡 All SQL queries below were executed as the postgres user.

CREATE MATERIALIZED VIEW tag_summary_5m
WITH (timescaledb.continuous) AS
SELECT
  asset_id,
  name,
  time_bucket('5m', timestamp) AS time,
  avg(value) AS value
FROM tag
GROUP BY asset_id, name, time
WITH NO DATA;

You can omit the WITH NO DATA option to automatically populate it with data, but we found it easier to manage the fill process later due to potential complexities with large datasets.

Filling it with data

For the next step, we initially increased the temp_file_limit as TimescaleDB requires a substantial amount of temporary disk space for the initial creation. For our database, we set this limit to 80GB, having found that lower values (such as 1GB, 2GB, 4GB, 8GB) were insufficient:

SET temp_file_limit = '80GB';

Next, we invoke refresh_continuous_aggregate to update the data initially:

Since this is the first refresh, we just set the window_start (2nd argument) to NULL and the window_end (3rd argument) to LOCALTIMESTAMP. This will read all data and add it to our materialized view.

CALL refresh_continuous_aggregate('tag_summary_5m', NULL, LOCALTIMESTAMP);

This operation may take a while, ranging from minutes to hours, depending on your dataset size.

If the function fails due to the temp_file_limit, simply increase it and retry.
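Alternatively, you can keep the temp_file_limit lower and split the initial fill into smaller windows, so each refresh only materializes one slice (the dates below are placeholders for your own data range):

```sql
-- refresh month by month instead of the full history at once
CALL refresh_continuous_aggregate('tag_summary_5m', '2024-03-01', '2024-04-01');
CALL refresh_continuous_aggregate('tag_summary_5m', '2024-04-01', '2024-05-01');
-- ...repeat until you reach the present, then rely on the refresh policy
```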

Ensuring Automatic Data Addition

Lastly, we set up TimescaleDB to automatically refresh the materialized view as new data arrives using add_continuous_aggregate_policy. This requires some fine-tuning based on your specific dataset:

SELECT add_continuous_aggregate_policy('tag_summary_5m',
  start_offset => INTERVAL '1 month',
  end_offset   => INTERVAL '1 hour',
  schedule_interval => INTERVAL '5 minute');

The start_offset is set to consider the oldest data entries we want to refresh, which, although perhaps overly cautious, helps avoid issues with delayed data ingestion. The end_offset determines the newest data to consider, and we've chosen data up to one hour old because it's unlikely to change thereafter. The schedule_interval is aligned with our bucket size to maintain consistency.

Executing this command ensures that TimescaleDB keeps your materialized view up-to-date without manual intervention.
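Dashboards can then query the aggregate instead of the raw tag table. Reconstructed from the query plan we captured, our benchmark query against the view looks roughly like this (your exact query may differ):

```sql
SELECT name, value, time
FROM tag_summary_5m
WHERE asset_id = get_asset_id_stable('many', 'data', 'points', '2')
  AND name = 'value'
  AND time BETWEEN '2024-03-05T00:00:00.000Z' AND '2024-06-03T00:00:00.000Z'
ORDER BY time DESC;
```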

Returning to our query plan, we achieve an additional substantial speed improvement:

Custom Scan (ChunkAppend) on _materialized_hypertable_6  (cost=0.54..1054.66 rows=25413 width=22) (actual time=0.039..14.634 rows=25896 loops=1)
  Order: _materialized_hypertable_6."time" DESC
  Chunks excluded during startup: 0
  ->  Index Scan using _hyper_6_19_chunk__materialized_hypertable_6_asset_id_time_idx on _hyper_6_19_chunk  (cost=0.54..781.54 rows=18788 width=22) (actual time=0.038..6.513 rows=19272 loops=1)
        Index Cond: ((asset_id = get_asset_id_stable('many'::text, 'data'::text, 'points'::text, '2'::text, ''::text, ''::text)) AND ("time" >= '2024-03-05 00:00:00+00'::timestamp with time zone) AND ("time" <= '2024-06-03 00:00:00+00'::timestamp with time zone))
        Filter: (name = 'value'::text)
  ->  Index Scan using _hyper_6_21_chunk__materialized_hypertable_6_asset_id_time_idx on _hyper_6_21_chunk  (cost=0.53..273.13 rows=6625 width=22) (actual time=0.081..6.187 rows=6624 loops=1)
        Index Cond: ((asset_id = get_asset_id_stable('many'::text, 'data'::text, 'points'::text, '2'::text, ''::text, ''::text)) AND ("time" >= '2024-03-05 00:00:00+00'::timestamp with time zone) AND ("time" <= '2024-06-03 00:00:00+00'::timestamp with time zone))
        Filter: (name = 'value'::text)
Planning Time: 0.565 ms
Execution Time: 15.729 ms

Permission issues

Should you encounter the error "db query error: pq: permission denied for materialized view tag_summary_5m" when querying from Grafana, grant the necessary permissions:

GRANT SELECT ON TABLE tag_summary_5m TO grafanareader;

Bonus: Real-time aggregates

To handle the need for up-to-the-minute data without frequent re-computations, TimescaleDB supports real-time aggregates, which utilize the materialized view for historical data and compute live aggregates for new data:

ALTER MATERIALIZED VIEW tag_summary_5m SET (timescaledb.materialized_only = false);

Looking at the query plan:

Sort  (cost=3400.54..3464.10 rows=25425 width=22) (actual time=22.740..24.438 rows=25896 loops=1)
  Sort Key: "*SELECT* 1"."time" DESC
  Sort Method: quicksort  Memory: 2792kB
  ->  Append  (cost=0.54..1540.19 rows=25425 width=22) (actual time=0.032..17.046 rows=25896 loops=1)
        ->  Subquery Scan on "*SELECT* 1"  (cost=0.54..1373.55 rows=25413 width=22) (actual time=0.031..15.210 rows=25896 loops=1)
              ->  Custom Scan (ChunkAppend) on _materialized_hypertable_6  (cost=0.54..1119.42 rows=25413 width=26) (actual time=0.031..12.048 rows=25896 loops=1)
                    Chunks excluded during startup: 0
                    ->  Index Scan using _hyper_6_19_chunk__materialized_hypertable_6_asset_id_time_idx on _hyper_6_19_chunk  (cost=0.54..829.72 rows=18788 width=26) (actual time=0.030..5.280 rows=19272 loops=1)
                          Index Cond: ((asset_id = get_asset_id_stable('many'::text, 'data'::text, 'points'::text, '2'::text, ''::text, ''::text)) AND ("time" < COALESCE(_timescaledb_internal.to_timestamp(_timescaledb_internal.cagg_watermark(6)), '-infinity'::timestamp with time zone)) AND ("time" >= '2024-03-05 00:00:00+00'::timestamp with time zone) AND ("time" <= '2024-06-03 00:00:00+00'::timestamp with time zone))
                          Filter: (name = 'value'::text)
                    ->  Index Scan using _hyper_6_21_chunk__materialized_hypertable_6_asset_id_time_idx on _hyper_6_21_chunk  (cost=0.54..289.69 rows=6625 width=26) (actual time=0.113..4.815 rows=6624 loops=1)
                          Index Cond: ((asset_id = get_asset_id_stable('many'::text, 'data'::text, 'points'::text, '2'::text, ''::text, ''::text)) AND ("time" < COALESCE(_timescaledb_internal.to_timestamp(_timescaledb_internal.cagg_watermark(6)), '-infinity'::timestamp with time zone)) AND ("time" >= '2024-03-05 00:00:00+00'::timestamp with time zone) AND ("time" <= '2024-06-03 00:00:00+00'::timestamp with time zone))
                          Filter: (name = 'value'::text)
        ->  Subquery Scan on "*SELECT* 2"  (cost=39.22..39.52 rows=12 width=23) (actual time=0.029..0.031 rows=0 loops=1)
              ->  HashAggregate  (cost=39.22..39.40 rows=12 width=27) (actual time=0.028..0.030 rows=0 loops=1)
                    Group Key: tag.asset_id, tag.name, time_bucket('00:05:00'::interval, tag."timestamp")
                    Batches: 1  Memory Usage: 24kB
                    ->  Custom Scan (ChunkAppend) on tag  (cost=0.42..39.08 rows=14 width=23) (actual time=0.024..0.025 rows=0 loops=1)
                          Chunks excluded during startup: 13
                          ->  Index Scan Backward using _hyper_1_13_chunk_tag_timestamp_idx on _hyper_1_13_chunk  (cost=0.29..2.78 rows=1 width=22) (actual time=0.023..0.023 rows=0 loops=1)
                                Index Cond: (("timestamp" >= COALESCE(_timescaledb_internal.to_timestamp(_timescaledb_internal.cagg_watermark(6)), '-infinity'::timestamp with time zone)) AND ("timestamp" >= '2024-03-05 00:00:00+00'::timestamp with time zone) AND ("timestamp" <= '2024-06-03 00:05:00+00'::timestamp with time zone))
                                Filter: ((name = 'value'::text) AND (time_bucket('00:05:00'::interval, "timestamp") >= '2024-03-05 00:00:00+00'::timestamp with time zone) AND (time_bucket('00:05:00'::interval, "timestamp") <= '2024-06-03 00:00:00+00'::timestamp with time zone) AND (asset_id = get_asset_id_stable('many'::text, 'data'::text, 'points'::text, '2'::text, ''::text, ''::text)))
Planning Time: 2.783 ms
Execution Time: 26.252 ms

Compared to the plain continuous aggregate, the difference in execution time is only slight.

Summary

With the implemented changes, query performance improved drastically (see also comparison below). Users can experience significantly faster dashboard loading times by upgrading to version 0.14.1 of our UMH Historian or by applying the discussed optimizations to their existing setups.

Benchmark & Comparison

Check out the comparison between four different queries in the video below.

  • get_asset_id_immutable - CA 5m is a continuous aggregate with 5m time buckets and our fast asset-id lookup
  • get_asset_id_immutable - CA 3h is a continuous aggregate with 3h time buckets and our fast asset-id lookup
  • get_asset_id_immutable is just the optimized function without any continuous aggregate
  • get_asset_id is the old function

The table contains 1,288,090 datapoints for our asset and 7,769,229 datapoints for all assets combined.

Video: Comparison of different methods (1:36)

                       base     _stable  _immutable  ca + _immutable  real-time ca + _immutable
Execution Time         5226ms   290ms    288ms       16ms             26ms
Speedup                N/A      ~18x     ~18x        ~327x            ~201x
Live Data¹             yes      yes      yes         no               yes
Flexible aggregation²  yes      yes      yes         no               no

¹: Always includes the latest datapoint, even when aggregated (real-time continuous aggregate)
²: A continuous aggregate (real-time or not) stores the result of only one query; if you want to show, for example, the max instead of the avg, you need to create another continuous aggregate
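As an illustration of footnote ², a max-based variant would be a separate continuous aggregate (the name tag_summary_5m_max is hypothetical):

```sql
CREATE MATERIALIZED VIEW tag_summary_5m_max
WITH (timescaledb.continuous) AS
SELECT
  asset_id,
  name,
  time_bucket('5m', timestamp) AS time,
  max(value) AS value
FROM tag
GROUP BY asset_id, name, time
WITH NO DATA;
```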

Benchmark results

Athena is our local development server and Dionysus is a Hetzner VM (CPX41) we use for testing.

All benchmarks were generated using fio-docker.

Flatcar@Athena

System Information at Mon Jun 3 10:41:11 UTC 2024

CPU Info:
CPU(s): 4
On-line CPU(s) list: 0-3
Model name: AMD Ryzen 7 5700G with Radeon Graphics
BIOS Model name: pc-q35-8.1 CPU @ 2.0GHz
Thread(s) per core: 1
Core(s) per socket: 4
Socket(s): 1
NUMA node0 CPU(s): 0-3

Full CPU Info:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 48 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 4
On-line CPU(s) list: 0-3
Vendor ID: AuthenticAMD
BIOS Vendor ID: QEMU
Model name: AMD Ryzen 7 5700G with Radeon Graphics
BIOS Model name: pc-q35-8.1 CPU @ 2.0GHz
BIOS CPU family: 1
CPU family: 25
Model: 80
Thread(s) per core: 1
Core(s) per socket: 4
Socket(s): 1
Stepping: 0
BogoMIPS: 7585.74
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm rep_good nopl cpuid extd_apicid tsc_known_freq pni pclmulqdq ssse3 fma cx16 sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm cmp_legacy svm cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw perfctr_core ssbd ibrs ibpb stibp vmmcall fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves clzero xsaveerptr wbnoinvd arat npt lbrv nrip_save tsc_scale vmcb_clean pausefilter pfthreshold v_vmsave_vmload vgif umip pku ospke vaes vpclmulqdq rdpid fsrm arch_capabilities
Virtualization: AMD-V
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 256 KiB (4 instances)
L1i cache: 256 KiB (4 instances)
L2 cache: 2 MiB (4 instances)
L3 cache: 64 MiB (4 instances)
NUMA node(s): 1
NUMA node0 CPU(s): 0-3
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Mitigation; safe RET, no microcode
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines, IBPB conditional, IBRS_FW, STIBP disabled, RSB filling, PBRSB-eIBRS Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected

Memory Info:
total used free shared buff/cache available
Mem: 15.6G 3.2G 7.6G 368.0M 4.8G 11.8G
Swap: 0 0 0

Disk Info:
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINTS
sda 8:0 0 128G 0 disk
├─sda1 8:1 0 128M 0 part
├─sda2 8:2 0 2M 0 part
├─sda3 8:3 0 1G 0 part
├─sda4 8:4 0 1G 0 part
├─sda6 8:6 0 128M 0 part
├─sda7 8:7 0 64M 0 part
└─sda9 8:9 0 125.7G 0 part /etc/hosts
/etc/hostname
/etc/resolv.conf
sr0 11:0 1 1024M 0 rom

Filesystem Info:
Filesystem Size Used Available Use% Mounted on
overlay 117.8G 20.7G 92.0G 18% /
tmpfs 64.0M 0 64.0M 0% /dev
shm 64.0M 0 64.0M 0% /dev/shm
tmpfs 7.8G 160.0K 7.8G 0% /mnt/logs
/dev/sda9 117.8G 20.7G 92.0G 18% /etc/resolv.conf
/dev/sda9 117.8G 20.7G 92.0G 18% /etc/hostname
/dev/sda9 117.8G 20.7G 92.0G 18% /etc/hosts

Swap Info:

Running write test...
write_test: (g=0): rw=write, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=1
fio-3.36
Starting 1 process
write_test: Laying out IO file (1 file / 1024MiB)

write_test: (groupid=0, jobs=1): err= 0: pid=55: Mon Jun 3 10:41:13 2024
write: IOPS=544, BW=544MiB/s (571MB/s)(1024MiB/1882msec); 0 zone resets
slat (usec): min=53, max=12523, avg=317.41, stdev=637.98
clat (usec): min=2, max=47842, avg=1515.18, stdev=2959.71
lat (usec): min=846, max=48074, avg=1832.59, stdev=3002.82
clat percentiles (usec):
| 1.00th=[ 20], 5.00th=[ 644], 10.00th=[ 676], 20.00th=[ 693],
| 30.00th=[ 701], 40.00th=[ 709], 50.00th=[ 717], 60.00th=[ 742],
| 70.00th=[ 840], 80.00th=[ 1139], 90.00th=[ 2409], 95.00th=[ 6390],
| 99.00th=[12518], 99.50th=[21627], 99.90th=[25822], 99.95th=[47973],
| 99.99th=[47973]
bw ( KiB/s): min=505856, max=616031, per=100.00%, avg=578079.67, stdev=62574.57, samples=3
iops : min= 494, max= 601, avg=564.33, stdev=60.93, samples=3
lat (usec) : 4=0.20%, 10=0.68%, 20=0.20%, 50=0.29%, 100=0.29%
lat (usec) : 250=0.49%, 500=0.78%, 750=59.38%, 1000=17.09%
lat (msec) : 2=10.06%, 4=2.25%, 10=6.05%, 20=1.66%, 50=0.59%
cpu : usr=1.38%, sys=4.78%, ctx=1071, majf=0, minf=10
IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
issued rwts: total=0,1024,0,0 short=0,0,0,0 dropped=0,0,0,0
latency : target=0, window=0, percentile=100.00%, depth=1

Run status group 0 (all jobs):
WRITE: bw=544MiB/s (571MB/s), 544MiB/s-544MiB/s (571MB/s-571MB/s), io=1024MiB (1074MB), run=1882-1882msec
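The exact fio invocation isn't included in the log, but the header line above reports the key parameters (rw=write, bs=1024KiB, ioengine=libaio, iodepth=1, one 1024 MiB file). A job file consistent with those values would look roughly like this sketch; the direct=1 flag is an assumption, since the log does not show whether the page cache was bypassed:

```ini
; sketch of a fio job matching the reported write_test parameters
[write_test]
rw=write            ; sequential writes, as in the header line
bs=1024k            ; 1024 KiB blocks
ioengine=libaio
iodepth=1
size=1024m          ; "Laying out IO file (1 file / 1024MiB)"
direct=1            ; assumption: not visible in the log output
```

The same job structure, with rw/bs/iodepth swapped out per the header lines, covers the read, random, and IOPS tests that follow.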
Running read test...
read_test: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=1
fio-3.36
Starting 1 process
read_test: Laying out IO file (1 file / 1024MiB)

read_test: (groupid=0, jobs=1): err= 0: pid=61: Mon Jun 3 10:41:18 2024
read: IOPS=592, BW=593MiB/s (621MB/s)(1024MiB/1728msec)
slat (usec): min=32, max=16670, avg=234.36, stdev=723.62
clat (usec): min=4, max=23763, avg=1446.37, stdev=2243.55
lat (usec): min=841, max=23830, avg=1680.74, stdev=2365.56
clat percentiles (usec):
| 1.00th=[ 87], 5.00th=[ 668], 10.00th=[ 693], 20.00th=[ 701],
| 30.00th=[ 709], 40.00th=[ 717], 50.00th=[ 725], 60.00th=[ 824],
| 70.00th=[ 865], 80.00th=[ 1090], 90.00th=[ 2769], 95.00th=[ 6128],
| 99.00th=[11994], 99.50th=[14877], 99.90th=[22152], 99.95th=[23725],
| 99.99th=[23725]
bw ( KiB/s): min=518144, max=684032, per=100.00%, avg=620487.67, stdev=89491.61, samples=3
iops : min= 506, max= 668, avg=605.67, stdev=87.21, samples=3
lat (usec) : 10=0.29%, 20=0.10%, 100=0.88%, 250=0.49%, 500=0.20%
lat (usec) : 750=51.76%, 1000=24.71%
lat (msec) : 2=10.35%, 4=3.32%, 10=6.45%, 20=1.27%, 50=0.20%
cpu : usr=0.17%, sys=4.69%, ctx=1026, majf=0, minf=266
IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
issued rwts: total=1024,0,0,0 short=0,0,0,0 dropped=0,0,0,0
latency : target=0, window=0, percentile=100.00%, depth=1

Run status group 0 (all jobs):
READ: bw=593MiB/s (621MB/s), 593MiB/s-593MiB/s (621MB/s-621MB/s), io=1024MiB (1074MB), run=1728-1728msec
Running random read/write test...
randrw_test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
fio-3.36
Starting 1 process
randrw_test: Laying out IO file (1 file / 1024MiB)

randrw_test: (groupid=0, jobs=1): err= 0: pid=67: Mon Jun 3 10:42:18 2024
read: IOPS=3234, BW=12.6MiB/s (13.2MB/s)(716MiB/56701msec)
slat (usec): min=6, max=36938, avg=51.23, stdev=365.84
clat (nsec): min=1102, max=57460k, avg=179781.60, stdev=977741.57
lat (usec): min=53, max=57505, avg=231.02, stdev=1041.64
clat percentiles (nsec):
| 1.00th=[ 1608], 5.00th=[ 35584], 10.00th=[ 52480],
| 20.00th=[ 55552], 30.00th=[ 57600], 40.00th=[ 59136],
| 50.00th=[ 60672], 60.00th=[ 63232], 70.00th=[ 67072],
| 80.00th=[ 77312], 90.00th=[ 94720], 95.00th=[ 342016],
| 99.00th=[ 3817472], 99.50th=[ 6717440], 99.90th=[14352384],
| 99.95th=[18219008], 99.99th=[27394048]
bw ( KiB/s): min= 5421, max=18571, per=100.00%, avg=12945.33, stdev=2789.04, samples=112
iops : min= 1355, max= 4642, avg=3236.11, stdev=697.23, samples=112
write: IOPS=1388, BW=5554KiB/s (5687kB/s)(308MiB/56701msec); 0 zone resets
slat (usec): min=7, max=42759, avg=77.70, stdev=538.34
clat (nsec): min=1102, max=79655k, avg=95184.65, stdev=734871.65
lat (usec): min=40, max=79730, avg=172.88, stdev=906.98
clat percentiles (nsec):
| 1.00th=[ 1544], 5.00th=[ 1944], 10.00th=[ 4896],
| 20.00th=[ 29056], 30.00th=[ 30336], 40.00th=[ 32128],
| 50.00th=[ 34048], 60.00th=[ 36608], 70.00th=[ 39680],
| 80.00th=[ 48896], 90.00th=[ 60160], 95.00th=[ 78336],
| 99.00th=[ 970752], 99.50th=[ 3391488], 99.90th=[11599872],
| 99.95th=[15400960], 99.99th=[21626880]
bw ( KiB/s): min= 2003, max= 8244, per=100.00%, avg=5556.79, stdev=1222.47, samples=112
iops : min= 500, max= 2061, avg=1388.99, stdev=305.62, samples=112
lat (usec) : 2=3.38%, 4=2.35%, 10=0.71%, 20=0.12%, 50=23.30%
lat (usec) : 100=62.73%, 250=2.83%, 500=0.92%, 750=1.47%, 1000=0.59%
lat (msec) : 2=0.47%, 4=0.32%, 10=0.60%, 20=0.17%, 50=0.03%
lat (msec) : 100=0.01%
cpu : usr=1.63%, sys=11.34%, ctx=248645, majf=0, minf=12
IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
issued rwts: total=183413,78731,0,0 short=0,0,0,0 dropped=0,0,0,0
latency : target=0, window=0, percentile=100.00%, depth=1

Run status group 0 (all jobs):
READ: bw=12.6MiB/s (13.2MB/s), 12.6MiB/s-12.6MiB/s (13.2MB/s-13.2MB/s), io=716MiB (751MB), run=56701-56701msec
WRITE: bw=5554KiB/s (5687kB/s), 5554KiB/s-5554KiB/s (5687kB/s-5687kB/s), io=308MiB (322MB), run=56701-56701msec
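As a sanity check, the reported IOPS figures follow directly from the issued I/O counters and the runtime: fio issued 183,413 reads and 78,731 writes over 56,701 ms.

```python
# Reproduce fio's reported IOPS from the raw counters above:
# "issued rwts: total=183413,78731" over a 56701 msec run.
runtime_s = 56701 / 1000          # run time in seconds
read_iops = 183413 / runtime_s    # ~3234, matching "read: IOPS=3234"
write_iops = 78731 / runtime_s    # ~1388, matching "write: IOPS=1388"
print(int(read_iops), int(write_iops))
```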
Running sequential read test...
seqread_test: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=1
fio-3.36
Starting 1 process
seqread_test: Laying out IO file (1 file / 1024MiB)

seqread_test: (groupid=0, jobs=1): err= 0: pid=73: Mon Jun 3 10:42:23 2024
read: IOPS=520, BW=520MiB/s (545MB/s)(1024MiB/1969msec)
slat (usec): min=24, max=4736, avg=118.54, stdev=167.91
clat (usec): min=2, max=42457, avg=1799.54, stdev=3495.40
lat (usec): min=788, max=42485, avg=1918.09, stdev=3488.07
clat percentiles (usec):
| 1.00th=[ 644], 5.00th=[ 701], 10.00th=[ 709], 20.00th=[ 717],
| 30.00th=[ 725], 40.00th=[ 734], 50.00th=[ 750], 60.00th=[ 791],
| 70.00th=[ 865], 80.00th=[ 1106], 90.00th=[ 3916], 95.00th=[ 7898],
| 99.00th=[19268], 99.50th=[23987], 99.90th=[28443], 99.95th=[42206],
| 99.99th=[42206]
bw ( KiB/s): min=419840, max=646000, per=95.72%, avg=509756.67, stdev=119986.28, samples=3
iops : min= 410, max= 630, avg=497.33, stdev=116.80, samples=3
lat (usec) : 4=0.10%, 10=0.29%, 20=0.10%, 750=49.71%, 1000=25.39%
lat (msec) : 2=11.62%, 4=3.12%, 10=6.15%, 20=2.64%, 50=0.88%
cpu : usr=0.36%, sys=3.15%, ctx=1020, majf=0, minf=266
IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
issued rwts: total=1024,0,0,0 short=0,0,0,0 dropped=0,0,0,0
latency : target=0, window=0, percentile=100.00%, depth=1

Run status group 0 (all jobs):
READ: bw=520MiB/s (545MB/s), 520MiB/s-520MiB/s (545MB/s-545MB/s), io=1024MiB (1074MB), run=1969-1969msec
Running sequential write test...
seqwrite_test: (g=0): rw=write, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=1
fio-3.36
Starting 1 process
seqwrite_test: Laying out IO file (1 file / 1024MiB)

seqwrite_test: (groupid=0, jobs=1): err= 0: pid=79: Mon Jun 3 10:42:26 2024
write: IOPS=464, BW=465MiB/s (487MB/s)(1024MiB/2204msec); 0 zone resets
slat (usec): min=50, max=8160, avg=264.25, stdev=460.98
clat (usec): min=3, max=37419, avg=1872.26, stdev=3358.32
lat (usec): min=842, max=37537, avg=2136.52, stdev=3368.34
clat percentiles (usec):
| 1.00th=[ 11], 5.00th=[ 652], 10.00th=[ 685], 20.00th=[ 701],
| 30.00th=[ 709], 40.00th=[ 742], 50.00th=[ 816], 60.00th=[ 889],
| 70.00th=[ 1037], 80.00th=[ 1483], 90.00th=[ 4555], 95.00th=[ 7373],
| 99.00th=[18220], 99.50th=[25822], 99.90th=[30540], 99.95th=[37487],
| 99.99th=[37487]
bw ( KiB/s): min=398562, max=502802, per=94.39%, avg=449085.00, stdev=54676.33, samples=4
iops : min= 389, max= 491, avg=438.50, stdev=53.46, samples=4
lat (usec) : 4=0.20%, 10=0.78%, 20=0.10%, 50=0.49%, 100=1.37%
lat (usec) : 250=1.07%, 500=0.10%, 750=37.60%, 1000=26.37%
lat (msec) : 2=16.41%, 4=4.30%, 10=8.40%, 20=1.95%, 50=0.88%
cpu : usr=1.13%, sys=3.31%, ctx=1046, majf=0, minf=11
IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
issued rwts: total=0,1024,0,0 short=0,0,0,0 dropped=0,0,0,0
latency : target=0, window=0, percentile=100.00%, depth=1

Run status group 0 (all jobs):
WRITE: bw=465MiB/s (487MB/s), 465MiB/s-465MiB/s (487MB/s-487MB/s), io=1024MiB (1074MB), run=2204-2204msec
Running IOPS test...
iops_test: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=64
fio-3.36
Starting 1 process
iops_test: Laying out IO file (1 file / 1024MiB)

iops_test: (groupid=0, jobs=1): err= 0: pid=85: Mon Jun 3 10:42:41 2024
read: IOPS=21.8k, BW=85.2MiB/s (89.3MB/s)(1024MiB/12022msec)
slat (usec): min=2, max=45405, avg=30.07, stdev=342.58
clat (usec): min=66, max=60105, avg=2902.76, stdev=4016.19
lat (usec): min=101, max=60183, avg=2932.83, stdev=4042.74
clat percentiles (usec):
| 1.00th=[ 322], 5.00th=[ 408], 10.00th=[ 482], 20.00th=[ 742],
| 30.00th=[ 906], 40.00th=[ 1074], 50.00th=[ 1483], 60.00th=[ 2409],
| 70.00th=[ 2769], 80.00th=[ 3326], 90.00th=[ 7046], 95.00th=[11076],
| 99.00th=[19268], 99.50th=[23725], 99.90th=[36439], 99.95th=[44303],
| 99.99th=[56886]
bw ( KiB/s): min=44856, max=121357, per=99.44%, avg=86737.04, stdev=21467.30, samples=23
iops : min=11214, max=30339, avg=21683.96, stdev=5366.73, samples=23
lat (usec) : 100=0.01%, 250=0.08%, 500=11.02%, 750=9.25%, 1000=15.90%
lat (msec) : 2=17.45%, 4=30.01%, 10=10.29%, 20=5.08%, 50=0.90%
lat (msec) : 100=0.03%
cpu : usr=4.03%, sys=22.24%, ctx=4376, majf=0, minf=72
IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
issued rwts: total=262144,0,0,0 short=0,0,0,0 dropped=0,0,0,0
latency : target=0, window=0, percentile=100.00%, depth=64

Run status group 0 (all jobs):
READ: bw=85.2MiB/s (89.3MB/s), 85.2MiB/s-85.2MiB/s (89.3MB/s-89.3MB/s), io=1024MiB (1074MB), run=12022-12022msec
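The bandwidth figure here is simply the request rate times the 4 KiB block size: 262,144 requests completed in 12,022 ms.

```python
# Bandwidth = request rate x block size for the 4 KiB random-read test:
# "issued rwts: total=262144" over 12022 msec with bs=4096B.
iops = 262144 / (12022 / 1000)       # ~21.8k, matching "IOPS=21.8k"
bw_mib_s = iops * 4096 / 2**20       # bytes/s -> MiB/s
print(round(bw_mib_s, 1))            # ~85.2, matching "BW=85.2MiB/s"
```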
Running CPU benchmark...
sysbench 1.0.20 (using system LuaJIT 2.1.1710398010)

Running the test with following options:
Number of threads: 1
Initializing random number generator from current time

Prime numbers limit: 20000

Initializing worker threads...

Threads started!

CPU speed:
events per second: 674.57

General statistics:
total time: 10.0149s
total number of events: 6756

Latency (ms):
min: 0.66
avg: 1.48
max: 37.13
95th percentile: 4.82
sum: 10000.82

Threads fairness:
events (avg/stddev): 6756.0000/0.00
execution time (avg/stddev): 10.0008/0.00
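sysbench's CPU test repeatedly verifies the primes below the configured limit (20,000 here) by trial division, and each full pass counts as one event, so 674.57 events per second means roughly 675 such passes per second. A rough Python equivalent of one event (sysbench itself is implemented in C; this sketch is only illustrative of the workload shape):

```python
def count_primes(limit: int) -> int:
    """Count primes below `limit` by trial division up to sqrt(n),
    roughly what one sysbench 'cpu' event does for limit=20000."""
    count = 0
    for n in range(2, limit):
        d = 2
        while d * d <= n:
            if n % d == 0:
                break  # n has a divisor, not prime
            d += 1
        else:
            count += 1  # no divisor found
    return count

print(count_primes(100))  # 25 primes below 100
```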

Running memory benchmark...
sysbench 1.0.20 (using system LuaJIT 2.1.1710398010)

Running the test with following options:
Number of threads: 1
Initializing random number generator from current time

Running memory speed test with the following options:
block size: 1KiB
total size: 102400MiB
operation: write
scope: global

Initializing worker threads...

Threads started!

Total operations: 19850825 (1984613.15 per second)

19385.57 MiB transferred (1938.10 MiB/sec)

General statistics:
total time: 10.0019s
total number of events: 19850825

Latency (ms):
min: 0.00
avg: 0.00
max: 47.98
95th percentile: 0.00
sum: 5006.19

Threads fairness:
events (avg/stddev): 19850825.0000/0.00
execution time (avg/stddev): 5.0062/0.00
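The transferred-data figure follows from the operation count and the 1 KiB block size:

```python
# sysbench memory: 19,850,825 writes of 1 KiB blocks in ~10 s.
ops = 19850825
mib_transferred = ops * 1024 / 2**20   # equivalently ops / 1024
print(round(mib_transferred, 2))       # ~19385.57, matching the log
```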

Second test system: Ubuntu@Dionysus

System Information at Mon Jun 3 10:45:22 UTC 2024

CPU Info:
CPU(s): 8
On-line CPU(s) list: 0-7
Model name: AMD EPYC Processor
BIOS Model name: NotSpecified CPU @ 2.0GHz
Thread(s) per core: 1
Core(s) per socket: 8
Socket(s): 1
NUMA node0 CPU(s): 0-7

Full CPU Info:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 40 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 8
On-line CPU(s) list: 0-7
Vendor ID: AuthenticAMD
BIOS Vendor ID: QEMU
Model name: AMD EPYC Processor
BIOS Model name: NotSpecified CPU @ 2.0GHz
BIOS CPU family: 1
CPU family: 23
Model: 49
Thread(s) per core: 1
Core(s) per socket: 8
Socket(s): 1
Stepping: 0
BogoMIPS: 4890.81
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm rep_good nopl cpuid extd_apicid tsc_known_freq pni pclmulqdq ssse3 fma cx16 sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand hypervisor lahf_lm cmp_legacy cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw topoext perfctr_core ssbd ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 clzero xsaveerptr wbnoinvd arat umip rdpid arch_capabilities
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 256 KiB (8 instances)
L1i cache: 256 KiB (8 instances)
L2 cache: 4 MiB (8 instances)
L3 cache: 16 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0-7
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Mitigation; untrained return thunk; SMT disabled
Vulnerability Spec rstack overflow: Vulnerable: Safe RET, no microcode
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines; IBPB conditional; STIBP disabled; RSB filling; PBRSB-eIBRS Not affected; BHI Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected

Memory Info:
              total    used    free    shared    buff/cache    available
Mem:          15.2G    2.5G    3.4G    7.1M      9.4G          12.5G
Swap:         0        0       0

Disk Info:
NAME      MAJ:MIN  RM  SIZE    RO  TYPE  MOUNTPOINTS
sda         8:0     0  228.9G   0  disk
├─sda1      8:1     0  228.6G   0  part  /etc/hosts
│                                        /etc/hostname
│                                        /etc/resolv.conf
│                                        /mnt/logs
├─sda14     8:14    0  1M       0  part
└─sda15     8:15    0  256M     0  part
sr0        11:0     1  2K       0  rom

Filesystem Info:
Filesystem   Size     Used     Available  Use%  Mounted on
overlay      225.0G   77.1G    138.8G     36%   /
tmpfs        64.0M    0        64.0M      0%    /dev
shm          64.0M    0        64.0M      0%    /dev/shm
/dev/sda1    225.0G   77.1G    138.8G     36%   /mnt/logs
/dev/sda1    225.0G   77.1G    138.8G     36%   /etc/resolv.conf
/dev/sda1    225.0G   77.1G    138.8G     36%   /etc/hostname
/dev/sda1    225.0G   77.1G    138.8G     36%   /etc/hosts

Swap Info: (no swap configured)

Running write test...
write_test: (g=0): rw=write, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=1
fio-3.36
Starting 1 process
write_test: Laying out IO file (1 file / 1024MiB)

write_test: (groupid=0, jobs=1): err= 0: pid=55: Mon Jun 3 10:45:23 2024
write: IOPS=1278, BW=1278MiB/s (1341MB/s)(1024MiB/801msec); 0 zone resets
slat (usec): min=45, max=219, avg=69.91, stdev=16.92
clat (usec): min=471, max=7401, avg=709.47, stdev=367.74
lat (usec): min=528, max=7468, avg=779.38, stdev=368.95
clat percentiles (usec):
| 1.00th=[ 490], 5.00th=[ 519], 10.00th=[ 537], 20.00th=[ 553],
| 30.00th=[ 578], 40.00th=[ 594], 50.00th=[ 619], 60.00th=[ 644],
| 70.00th=[ 676], 80.00th=[ 750], 90.00th=[ 930], 95.00th=[ 1156],
| 99.00th=[ 2114], 99.50th=[ 3097], 99.90th=[ 3720], 99.95th=[ 7373],
| 99.99th=[ 7373]
bw ( MiB/s): min= 1298, max= 1298, per=100.00%, avg=1298.00, stdev= 0.00, samples=1
iops : min= 1298, max= 1298, avg=1298.00, stdev= 0.00, samples=1
lat (usec) : 500=1.86%, 750=78.22%, 1000=11.52%
lat (msec) : 2=7.13%, 4=1.17%, 10=0.10%
cpu : usr=1.38%, sys=8.62%, ctx=1025, majf=0, minf=10
IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
issued rwts: total=0,1024,0,0 short=0,0,0,0 dropped=0,0,0,0
latency : target=0, window=0, percentile=100.00%, depth=1

Run status group 0 (all jobs):
WRITE: bw=1278MiB/s (1341MB/s), 1278MiB/s-1278MiB/s (1341MB/s-1341MB/s), io=1024MiB (1074MB), run=801-801msec
Running read test...
read_test: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=1
fio-3.36
Starting 1 process
read_test: Laying out IO file (1 file / 1024MiB)

read_test: (groupid=0, jobs=1): err= 0: pid=61: Mon Jun 3 10:45:26 2024
read: IOPS=1304, BW=1304MiB/s (1368MB/s)(1024MiB/785msec)
slat (usec): min=45, max=658, avg=64.31, stdev=23.48
clat (usec): min=503, max=14377, avg=699.97, stdev=855.81
lat (usec): min=569, max=14440, avg=764.28, stdev=855.99
clat percentiles (usec):
| 1.00th=[ 523], 5.00th=[ 537], 10.00th=[ 545], 20.00th=[ 570],
| 30.00th=[ 586], 40.00th=[ 603], 50.00th=[ 627], 60.00th=[ 644],
| 70.00th=[ 668], 80.00th=[ 701], 90.00th=[ 758], 95.00th=[ 807],
| 99.00th=[ 1237], 99.50th=[ 1942], 99.90th=[14353], 99.95th=[14353],
| 99.99th=[14353]
bw ( MiB/s): min= 1278, max= 1278, per=97.97%, avg=1278.00, stdev= 0.00, samples=1
iops : min= 1278, max= 1278, avg=1278.00, stdev= 0.00, samples=1
lat (usec) : 750=89.65%, 1000=8.40%
lat (msec) : 2=1.46%, 4=0.10%, 20=0.39%
cpu : usr=0.77%, sys=12.50%, ctx=1024, majf=0, minf=266
IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
issued rwts: total=1024,0,0,0 short=0,0,0,0 dropped=0,0,0,0
latency : target=0, window=0, percentile=100.00%, depth=1

Run status group 0 (all jobs):
READ: bw=1304MiB/s (1368MB/s), 1304MiB/s-1304MiB/s (1368MB/s-1368MB/s), io=1024MiB (1074MB), run=785-785msec
Running random read/write test...
randrw_test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
fio-3.36
Starting 1 process
randrw_test: Laying out IO file (1 file / 1024MiB)

randrw_test: (groupid=0, jobs=1): err= 0: pid=67: Mon Jun 3 10:46:15 2024
read: IOPS=3912, BW=15.3MiB/s (16.0MB/s)(716MiB/46873msec)
slat (usec): min=10, max=281, avg=17.15, stdev= 5.71
clat (usec): min=64, max=14617, avg=182.21, stdev=78.62
lat (usec): min=96, max=14630, avg=199.37, stdev=79.10
clat percentiles (usec):
| 1.00th=[ 129], 5.00th=[ 139], 10.00th=[ 143], 20.00th=[ 149],
| 30.00th=[ 157], 40.00th=[ 163], 50.00th=[ 169], 60.00th=[ 182],
| 70.00th=[ 192], 80.00th=[ 210], 90.00th=[ 237], 95.00th=[ 253],
| 99.00th=[ 326], 99.50th=[ 412], 99.90th=[ 578], 99.95th=[ 660],
| 99.99th=[ 1090]
bw ( KiB/s): min=14336, max=17016, per=99.98%, avg=15649.63, stdev=509.45, samples=93
iops : min= 3584, max= 4254, avg=3912.41, stdev=127.36, samples=93
write: IOPS=1679, BW=6719KiB/s (6880kB/s)(308MiB/46873msec); 0 zone resets
slat (usec): min=11, max=533, avg=18.93, stdev= 6.51
clat (usec): min=2, max=13570, avg=105.37, stdev=73.86
lat (usec): min=85, max=13587, avg=124.30, stdev=74.51
clat percentiles (usec):
| 1.00th=[ 83], 5.00th=[ 87], 10.00th=[ 90], 20.00th=[ 92],
| 30.00th=[ 95], 40.00th=[ 97], 50.00th=[ 100], 60.00th=[ 104],
| 70.00th=[ 109], 80.00th=[ 115], 90.00th=[ 124], 95.00th=[ 135],
| 99.00th=[ 174], 99.50th=[ 206], 99.90th=[ 367], 99.95th=[ 635],
| 99.99th=[ 1811]
bw ( KiB/s): min= 5928, max= 7792, per=99.99%, avg=6718.19, stdev=344.93, samples=93
iops : min= 1482, max= 1948, avg=1679.55, stdev=86.23, samples=93
lat (usec) : 4=0.01%, 50=0.01%, 100=15.15%, 250=80.72%, 500=3.97%
lat (usec) : 750=0.13%, 1000=0.02%
lat (msec) : 2=0.01%, 4=0.01%, 10=0.01%, 20=0.01%
cpu : usr=2.61%, sys=14.84%, ctx=262147, majf=0, minf=12
IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
issued rwts: total=183413,78731,0,0 short=0,0,0,0 dropped=0,0,0,0
latency : target=0, window=0, percentile=100.00%, depth=1

Run status group 0 (all jobs):
READ: bw=15.3MiB/s (16.0MB/s), 15.3MiB/s-15.3MiB/s (16.0MB/s-16.0MB/s), io=716MiB (751MB), run=46873-46873msec
WRITE: bw=6719KiB/s (6880kB/s), 6719KiB/s-6719KiB/s (6880kB/s-6880kB/s), io=308MiB (322MB), run=46873-46873msec
Running sequential read test...
seqread_test: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=1
fio-3.36
Starting 1 process
seqread_test: Laying out IO file (1 file / 1024MiB)

seqread_test: (groupid=0, jobs=1): err= 0: pid=73: Mon Jun 3 10:46:18 2024
read: IOPS=1247, BW=1247MiB/s (1308MB/s)(1024MiB/821msec)
slat (usec): min=51, max=717, avg=69.38, stdev=24.75
clat (usec): min=487, max=16380, avg=729.75, stdev=574.87
lat (usec): min=572, max=16462, avg=799.13, stdev=576.02
clat percentiles (usec):
| 1.00th=[ 529], 5.00th=[ 562], 10.00th=[ 586], 20.00th=[ 619],
| 30.00th=[ 652], 40.00th=[ 668], 50.00th=[ 693], 60.00th=[ 709],
| 70.00th=[ 734], 80.00th=[ 758], 90.00th=[ 799], 95.00th=[ 857],
| 99.00th=[ 1205], 99.50th=[ 1631], 99.90th=[ 5538], 99.95th=[16319],
| 99.99th=[16319]
bw ( MiB/s): min= 1282, max= 1282, per=100.00%, avg=1282.00, stdev= 0.00, samples=1
iops : min= 1282, max= 1282, avg=1282.00, stdev= 0.00, samples=1
lat (usec) : 500=0.10%, 750=78.03%, 1000=19.92%
lat (msec) : 2=1.46%, 10=0.39%, 20=0.10%
cpu : usr=1.22%, sys=11.95%, ctx=1024, majf=0, minf=265
IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
issued rwts: total=1024,0,0,0 short=0,0,0,0 dropped=0,0,0,0
latency : target=0, window=0, percentile=100.00%, depth=1

Run status group 0 (all jobs):
READ: bw=1247MiB/s (1308MB/s), 1247MiB/s-1247MiB/s (1308MB/s-1308MB/s), io=1024MiB (1074MB), run=821-821msec
Running sequential write test...
seqwrite_test: (g=0): rw=write, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=1
fio-3.36
Starting 1 process
seqwrite_test: Laying out IO file (1 file / 1024MiB)

seqwrite_test: (groupid=0, jobs=1): err= 0: pid=79: Mon Jun 3 10:46:19 2024
write: IOPS=1232, BW=1232MiB/s (1292MB/s)(1024MiB/831msec); 0 zone resets
slat (usec): min=50, max=223, avg=75.29, stdev=17.56
clat (usec): min=469, max=27303, avg=733.80, stdev=1376.75
lat (usec): min=532, max=27391, avg=809.08, stdev=1377.92
clat percentiles (usec):
| 1.00th=[ 490], 5.00th=[ 506], 10.00th=[ 519], 20.00th=[ 537],
| 30.00th=[ 553], 40.00th=[ 570], 50.00th=[ 594], 60.00th=[ 619],
| 70.00th=[ 652], 80.00th=[ 701], 90.00th=[ 775], 95.00th=[ 840],
| 99.00th=[ 1434], 99.50th=[14484], 99.90th=[15401], 99.95th=[27395],
| 99.99th=[27395]
bw ( MiB/s): min= 1346, max= 1346, per=100.00%, avg=1346.00, stdev= 0.00, samples=1
iops : min= 1346, max= 1346, avg=1346.00, stdev= 0.00, samples=1
lat (usec) : 500=3.32%, 750=85.35%, 1000=9.57%
lat (msec) : 2=0.98%, 4=0.10%, 20=0.59%, 50=0.10%
cpu : usr=2.17%, sys=8.19%, ctx=1024, majf=0, minf=10
IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
issued rwts: total=0,1024,0,0 short=0,0,0,0 dropped=0,0,0,0
latency : target=0, window=0, percentile=100.00%, depth=1

Run status group 0 (all jobs):
WRITE: bw=1232MiB/s (1292MB/s), 1232MiB/s-1232MiB/s (1292MB/s-1292MB/s), io=1024MiB (1074MB), run=831-831msec
Running IOPS test...
iops_test: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=64
fio-3.36
Starting 1 process
iops_test: Laying out IO file (1 file / 1024MiB)

iops_test: (groupid=0, jobs=1): err= 0: pid=85: Mon Jun 3 10:46:25 2024
read: IOPS=65.5k, BW=256MiB/s (268MB/s)(1024MiB/4000msec)
slat (usec): min=4, max=518, avg= 5.83, stdev= 4.95
clat (usec): min=340, max=4268, avg=969.86, stdev=185.49
lat (usec): min=345, max=4273, avg=975.69, stdev=185.44
clat percentiles (usec):
| 1.00th=[ 627], 5.00th=[ 758], 10.00th=[ 807], 20.00th=[ 848],
| 30.00th=[ 881], 40.00th=[ 906], 50.00th=[ 930], 60.00th=[ 963],
| 70.00th=[ 1004], 80.00th=[ 1074], 90.00th=[ 1205], 95.00th=[ 1319],
| 99.00th=[ 1598], 99.50th=[ 1729], 99.90th=[ 2024], 99.95th=[ 2147],
| 99.99th=[ 4047]
bw ( KiB/s): min=261848, max=272720, per=100.00%, avg=266332.57, stdev=3802.85, samples=7
iops : min=65462, max=68180, avg=66583.14, stdev=951.11, samples=7
lat (usec) : 500=0.18%, 750=4.07%, 1000=65.76%
lat (msec) : 2=29.88%, 4=0.10%, 10=0.01%
cpu : usr=10.05%, sys=48.09%, ctx=10108, majf=0, minf=72
IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
issued rwts: total=262144,0,0,0 short=0,0,0,0 dropped=0,0,0,0
latency : target=0, window=0, percentile=100.00%, depth=64

Run status group 0 (all jobs):
READ: bw=256MiB/s (268MB/s), 256MiB/s-256MiB/s (268MB/s-268MB/s), io=1024MiB (1074MB), run=4000-4000msec
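The iodepth=64 results on this faster system also illustrate Little's law: sustained throughput times mean latency equals the in-flight concurrency.

```python
# Little's law check for the iodepth=64 random-read test:
# concurrency = throughput x latency.
iops = 262144 / 4.000        # 65,536 IOPS over the 4000 ms run
avg_lat_s = 975.69e-6        # "lat ... avg=975.69" usec
queue_depth = iops * avg_lat_s
print(round(queue_depth))    # ~64, the configured iodepth
```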
Running CPU benchmark...
sysbench 1.0.20 (using system LuaJIT 2.1.1710398010)

Running the test with following options:
Number of threads: 1
Initializing random number generator from current time

Prime numbers limit: 20000

Initializing worker threads...

Threads started!

CPU speed:
events per second: 1306.85

General statistics:
total time: 10.0003s
total number of events: 13070

Latency (ms):
min: 0.72
avg: 0.76
max: 1.26
95th percentile: 0.92
sum: 9992.69

Threads fairness:
events (avg/stddev): 13070.0000/0.00
execution time (avg/stddev): 9.9927/0.00

Running memory benchmark...
sysbench 1.0.20 (using system LuaJIT 2.1.1710398010)

Running the test with following options:
Number of threads: 1
Initializing random number generator from current time

Running memory speed test with the following options:
block size: 1KiB
total size: 102400MiB
operation: write
scope: global

Initializing worker threads...

Threads started!

Total operations: 45603628 (4559891.85 per second)

44534.79 MiB transferred (4453.02 MiB/sec)

General statistics:
total time: 10.0001s
total number of events: 45603628

Latency (ms):
min: 0.00
avg: 0.00
max: 0.29
95th percentile: 0.00
sum: 4461.99

Threads fairness:
events (avg/stddev): 45603628.0000/0.00
execution time (avg/stddev): 4.4620/0.00
