Types of Keys
Any type of key can be given. The underlying key-value storage is a circular buffer indexed by a Map. Cache-miss performance with a Map is better than with a plain object mapping keys to values, and the circular buffer is garbage-collection-friendly.
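For example, the same cache instance can serve integer, string, and object keys. The sketch below assumes the same Lru constructor and get signature used in the benchmark code that follows; the values returned by the read-miss function are only illustrative:
"use strict";
let Lru = require("./lrucache.js").Lru;

// read-miss function: compute/fetch the value for a key that is not cached yet
let cache = new Lru(1000 /* capacity */, async function(key, callback){
    callback("value for " + String(key));
}, 100000 /* expire time in ms */);

cache.get(42, function(value){ console.log(value); });          // integer key
cache.get("forty-two", function(value){ console.log(value); }); // string key

let objKey = { x: 42 };                                          // object key:
cache.get(objKey, function(value){ console.log(value); });       // reuse the same reference to get hits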
Benchmark for 1,000,000 keys (timings in milliseconds; test system: FX8150 at 2.1 GHz, single-channel DDR3-1600 memory, Ubuntu 18.04 LTS 64-bit):
"use strict";
let Lru = require("./lrucache.js").Lru;
let miss=0;
let hit_miss = 0;
let expiretime=100000000;
let cache = new Lru(10000 /* enough to hold 100x100 pixels */, async function(key,callback){
// cache-read-miss function
miss++;
callback(key);
}, expiretime /* , cache-write-miss function with parameters (key,value,callback) if cache.set(..) will be used */);
let rep = 0;
function test()
{
miss=0;
let ctr=0;
let t = Date.now();
for(let i=0;i<1000000;i++)
cache.get(Math.floor(Math.random()*1000000).toString(),function(result){
ctr++;
if(ctr==1000000){
console.log((Date.now()-t)+"ms cache hit="+(100*(1000000-miss)/1000000)+"%");
if(rep<10) { rep++; test(); }
}
});
}
test();
Integer keys, all cache-hits (timings include the Math.random/Math.floor key computation):
704ms cache hit=99.9%
1280ms cache hit=100%
1259ms cache hit=100%
647ms cache hit=100%
641ms cache hit=100%
642ms cache hit=100%
638ms cache hit=100%
656ms cache hit=100%
646ms cache hit=100%
648ms cache hit=100%
643ms cache hit=100%
String keys, all cache-hits:
769ms cache hit=99.9%
1430ms cache hit=100%
1383ms cache hit=100%
707ms cache hit=100%
722ms cache hit=100%
707ms cache hit=100%
723ms cache hit=100%
731ms cache hit=100%
729ms cache hit=100%
718ms cache hit=100%
716ms cache hit=100%
Object keys, all cache-hits:
636ms cache hit=99.9%
981ms cache hit=100%
975ms cache hit=100%
561ms cache hit=100%
568ms cache hit=100%
573ms cache hit=100%
564ms cache hit=100%
556ms cache hit=100%
566ms cache hit=100%
560ms cache hit=100%
566ms cache hit=100%
Object keys appear to be the fastest, but an extra array had to hold references to them so that they are not anonymous/temporary objects from the cache's point of view; each object reference is a unique key, even if two objects stringify to the same result.
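A minimal sketch of that reference-holding array (the objectKeys name is illustrative, not part of the library): the Map inside the cache compares object keys by reference, so a fresh object literal on every call would never produce a cache-hit.
"use strict";
let Lru = require("./lrucache.js").Lru;

let miss = 0;
let cache = new Lru(10000, async function(key, callback){ miss++; callback(key); }, 100000000);

// Pre-allocate the object keys once and reuse the same references afterwards.
let objectKeys = [];
for(let i = 0; i < 10000; i++)
    objectKeys.push({ i: i });

// Later, always index into objectKeys instead of building objects inline.
cache.get(objectKeys[Math.floor(Math.random() * 10000)], function(result){
    // result is the object itself here, echoed back by the read-miss function
});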
Integer keys, 99% cache-misses:
1464ms cache hit=0.988%
1806ms cache hit=1.0018%
1921ms cache hit=1.0045%
1221ms cache hit=0.9981%
1374ms cache hit=1.0122%
1189ms cache hit=1.0092%
1366ms cache hit=1.0006%
1219ms cache hit=1.0184%
1199ms cache hit=0.9956%
1184ms cache hit=1.0009%
1184ms cache hit=1.0013%
String keys, 99% cache-misses:
1879ms cache hit=0.9889%
2484ms cache hit=0.9998%
1716ms cache hit=0.9883%
1933ms cache hit=0.996%
1701ms cache hit=1.0255%
1847ms cache hit=0.9996%
1688ms cache hit=0.996%
1890ms cache hit=0.9981%
1676ms cache hit=1.0122%
1886ms cache hit=1.0199%
1693ms cache hit=0.9989%
Since string keys involve more bytes, it is expected that this memory-intensive workload runs slower.
Object keys, 100% cache-misses:
1654ms cache hit=0%
1628ms cache hit=0%
1176ms cache hit=0%
1337ms cache hit=0%
1248ms cache hit=0%
1310ms cache hit=0%
1305ms cache hit=0%
1297ms cache hit=0%
1270ms cache hit=0%
1295ms cache hit=0%
1290ms cache hit=0%
If an FX8150 at 2.1 GHz with single-channel memory can do about 2000 cache-hits per millisecond with object keys, a newer CPU at 4.2 GHz should do at least 5000 cache-hits per millisecond. A cache-miss throughput of roughly 780 operations per millisecond should also not be a bottleneck for workloads such as (see the file-reading sketch after this list):
- file-reading from NVME/SSD
- socket I/O
- downloading web documents
- updating/reading a database
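For example, a read-miss function can wrap an asynchronous file read so that repeated requests for the same path are served from memory. This is a sketch assuming the same Lru API as above; fileCache and the chosen path are illustrative:
"use strict";
let fs = require("fs");
let Lru = require("./lrucache.js").Lru;

// read-miss function: load the file from disk only when the path is not
// cached yet (or its entry has expired)
let fileCache = new Lru(500 /* capacity */, async function(path, callback){
    fs.readFile(path, "utf8", function(err, data){
        callback(err ? "" : data);
    });
}, 60000 /* expire time: 60 seconds */);

fileCache.get("./package.json", function(contents){
    console.log(contents.length + " characters served (from disk or from cache)");
});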