Characterizing a workload and recreating it as a synthetic benchmark
News category: Linux Tips
Source: reddit.com
We have several workloads that we run in a Linux compute pool on homogeneous systems. These are relatively high-volume and potentially long-running workloads with expensive licenses, so when it comes time to purchase new servers there are significant savings to be had from choosing the right server for our workload.
I'd like to generate a synthetic benchmark using, for example, stress-ng that would give a reasonable approximation of this workload's CPU, memory (ideally including cache misses, etc.), and disk behavior, to simplify benchmarking across systems. We aren't a large enough customer for our server vendor to bring different sample systems in house, and our software licenses forbid running off-prem on cloud resources, where I could otherwise try different hardware.
Does anyone know of an automated, or at least well-documented, way of profiling a workload and generating a stress-ng configuration that would give a decent first-order approximation of relative performance between systems?