Netty Official Documentation: Benchmarks
Original article link | Translator: lijunshu
Netty has a module called 'netty-microbench' that can be used to run a set of microbenchmarks. netty-microbench is built on OpenJDK JMH, the recommended benchmarking solution for HotSpot. No additional dependencies are needed to start benchmarking Netty.
Running the benchmarks
You can run the benchmarks either from the Maven command line or directly from your IDE. To run them from the command line with the default settings, use mvn -DskipTests=false test. Setting skipTests=false is needed because the benchmarks are skipped by default; we do not want them to run as part of the normal unit-test build.
If everything works, you will see JMH warm up and then run the benchmarks for the configured number of forks, producing a nice report at the end. The output for a single benchmark usually looks like this:
[code]
# Fork: 2 of 2
# Warmup: 10 iterations, 1 s each
# Measurement: 10 iterations, 1 s each
# Threads: 1 thread, will synchronize iterations
# Benchmark mode: Throughput, ops/time
# Running: io.netty.microbench.buffer.ByteBufAllocatorBenchmark.pooledDirectAllocAndFree_1_0
# Warmup Iteration 1: 8454.103 ops/ms
# Warmup Iteration 2: 11551.524 ops/ms
# Warmup Iteration 3: 11677.575 ops/ms
# Warmup Iteration 4: 11404.954 ops/ms
# Warmup Iteration 5: 11553.299 ops/ms
# Warmup Iteration 6: 11514.766 ops/ms
# Warmup Iteration 7: 11661.768 ops/ms
# Warmup Iteration 8: 11667.577 ops/ms
# Warmup Iteration 9: 11551.240 ops/ms
# Warmup Iteration 10: 11692.991 ops/ms
Iteration 1: 11633.877 ops/ms
Iteration 2: 11740.063 ops/ms
Iteration 3: 11751.798 ops/ms
Iteration 4: 11260.071 ops/ms
Iteration 5: 11461.010 ops/ms
Iteration 6: 11642.912 ops/ms
Iteration 7: 11808.595 ops/ms
Iteration 8: 11683.780 ops/ms
Iteration 9: 11750.292 ops/ms
Iteration 10: 11769.986 ops/ms
Result : 11650.238 ±(99.9%) 229.698 ops/ms
Statistics: (min, avg, max) = (11260.071, 11650.238, 11808.595), stdev = 169.080
Confidence interval (99.9%): [11420.540, 11879.937]
[/code]
At the end, your results will look something like the following; the exact numbers depend heavily on your system configuration.
[code]
Benchmark Mode Samples Mean Mean error Units
i.n.m.b.ByteBufAllocatorBenchmark.pooledDirectAllocAndFree_1_0 thrpt 20 11658.812 120.728 ops/ms
i.n.m.b.ByteBufAllocatorBenchmark.pooledDirectAllocAndFree_2_256 thrpt 20 10308.626 147.528 ops/ms
i.n.m.b.ByteBufAllocatorBenchmark.pooledDirectAllocAndFree_3_1024 thrpt 20 8855.815 55.933 ops/ms
i.n.m.b.ByteBufAllocatorBenchmark.pooledDirectAllocAndFree_4_4096 thrpt 20 5545.538 1279.721 ops/ms
i.n.m.b.ByteBufAllocatorBenchmark.pooledDirectAllocAndFree_5_16384 thrpt 20 6741.581 75.975 ops/ms
i.n.m.b.ByteBufAllocatorBenchmark.pooledDirectAllocAndFree_6_65536 thrpt 20 7252.869 70.609 ops/ms
i.n.m.b.ByteBufAllocatorBenchmark.pooledHeapAllocAndFree_1_0 thrpt 20 9750.225 73.900 ops/ms
i.n.m.b.ByteBufAllocatorBenchmark.pooledHeapAllocAndFree_2_256 thrpt 20 9936.639 657.818 ops/ms
i.n.m.b.ByteBufAllocatorBenchmark.pooledHeapAllocAndFree_3_1024 thrpt 20 8903.130 197.533 ops/ms
i.n.m.b.ByteBufAllocatorBenchmark.pooledHeapAllocAndFree_4_4096 thrpt 20 6664.157 74.163 ops/ms
i.n.m.b.ByteBufAllocatorBenchmark.pooledHeapAllocAndFree_5_16384 thrpt 20 6374.924 337.869 ops/ms
i.n.m.b.ByteBufAllocatorBenchmark.pooledHeapAllocAndFree_6_65536 thrpt 20 6386.337 44.960 ops/ms
i.n.m.b.ByteBufAllocatorBenchmark.unpooledDirectAllocAndFree_1_0 thrpt 20 2137.241 30.792 ops/ms
i.n.m.b.ByteBufAllocatorBenchmark.unpooledDirectAllocAndFree_2_256 thrpt 20 1873.727 41.843 ops/ms
i.n.m.b.ByteBufAllocatorBenchmark.unpooledDirectAllocAndFree_3_1024 thrpt 20 1902.025 34.473 ops/ms
i.n.m.b.ByteBufAllocatorBenchmark.unpooledDirectAllocAndFree_4_4096 thrpt 20 1534.347 20.509 ops/ms
i.n.m.b.ByteBufAllocatorBenchmark.unpooledDirectAllocAndFree_5_16384 thrpt 20 838.804 12.575 ops/ms
i.n.m.b.ByteBufAllocatorBenchmark.unpooledDirectAllocAndFree_6_65536 thrpt 20 276.976 3.021 ops/ms
i.n.m.b.ByteBufAllocatorBenchmark.unpooledHeapAllocAndFree_1_0 thrpt 20 35820.568 259.187 ops/ms
i.n.m.b.ByteBufAllocatorBenchmark.unpooledHeapAllocAndFree_2_256 thrpt 20 19660.951 295.012 ops/ms
i.n.m.b.ByteBufAllocatorBenchmark.unpooledHeapAllocAndFree_3_1024 thrpt 20 6264.614 77.704 ops/ms
i.n.m.b.ByteBufAllocatorBenchmark.unpooledHeapAllocAndFree_4_4096 thrpt 20 2921.598 95.492 ops/ms
i.n.m.b.ByteBufAllocatorBenchmark.unpooledHeapAllocAndFree_5_16384 thrpt 20 991.631 49.220 ops/ms
i.n.m.b.ByteBufAllocatorBenchmark.unpooledHeapAllocAndFree_6_65536 thrpt 20 261.718 11.108 ops/ms
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 993.382 sec – in io.netty.microbench.buffer.ByteBufAllocatorBenchmark
[/code]
You can also run the benchmarks directly from your IDE. If you have imported the top-level Netty project, open the microbench sub-project and navigate to src/test/java/io/netty/microbench. In the buffer sub-package you can run ByteBufAllocatorBenchmark just like any other JUnit test. The main difference is that all of the benchmarks in the class are run at once, rather than one at a time. As with Maven, you will see the same output in the console.
Writing a benchmark
Writing a benchmark is not hard; writing a correct one is. This is not because the microbench project is difficult to use, but because it is hard to avoid the common benchmarking pitfalls. JMH provides helpful annotations and features that let you avoid most of them. To get started, have your benchmark class extend AbstractMicrobenchmark, which ensures it runs as a JUnit test with sensible default parameters.
[code]
public class MyBenchmark extends AbstractMicrobenchmark {
}
[/code]
The next step is to create a method annotated with @GenerateMicroBenchmark and give it a descriptive name.
[code]
@GenerateMicroBenchmark
public void measureSomethingHere() {
}
[/code]
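To make this more concrete, here is a minimal sketch (not taken from the original documentation) of what a filled-in benchmark might look like. The choice of PooledByteBufAllocator.DEFAULT and the buffer size are assumptions made purely for illustration; returning a value from the benchmark method helps JMH guard against dead-code elimination.
[code]
import io.netty.buffer.ByteBuf;
import io.netty.buffer.ByteBufAllocator;
import io.netty.buffer.PooledByteBufAllocator;

// AbstractMicrobenchmark and @GenerateMicroBenchmark are the same classes used
// in the snippets above.
public class MyBenchmark extends AbstractMicrobenchmark {

    // Assumption for this sketch: measure Netty's default pooled allocator.
    private final ByteBufAllocator allocator = PooledByteBufAllocator.DEFAULT;

    @GenerateMicroBenchmark
    public boolean pooledHeapAllocAndFree() {
        // Allocate a small heap buffer and release it again.
        ByteBuf buffer = allocator.heapBuffer(256);
        // Returning the result keeps the JIT from eliminating the work as dead code.
        return buffer.release();
    }
}
[/code]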
It is a good idea to look at the existing samples for inspiration on how to write JMH benchmarks. You may also want to watch some of the talks given by JMH's main author.
Customizing runtime parameters
The default AbstractMicrobenchmark configuration is:
- Warmup iterations: 10
- Measurement iterations: 10
- Forks: 2
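Expressed as JMH annotations, those defaults would roughly correspond to the following class-level configuration (an illustration only, based on the values listed above; AbstractMicrobenchmark applies them for you, so you do not need to write this yourself).
[code]
// Roughly what the AbstractMicrobenchmark defaults amount to in annotation form.
@Warmup(iterations = 10)
@Measurement(iterations = 10)
@Fork(2)
public class MyBenchmark extends AbstractMicrobenchmark {
}
[/code]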
These settings can be overridden at run time via system properties:
mvn -DskipTests=false -DwarmupIterations=2 -DmeasureIterations=3 -Dforks=1 test
Note that it is generally not a good idea to run a benchmark with so few iterations; fewer iterations are mainly useful for verifying that the benchmark works at all, after which you can do a full run with a larger number of iterations. The defaults can also be overridden by placing JMH annotations directly on the benchmark:
[code]
@Warmup(iterations = 20)
@Fork(1)
public class MyBenchmark extends AbstractMicrobenchmark {
}
[/code]
These annotations can be applied at either the class level or the method level; parameters given on the command line override the values from the annotations.
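As a sketch (not from the original text), the same kind of override can also be placed on an individual benchmark method rather than on the whole class:
[code]
public class MyBenchmark extends AbstractMicrobenchmark {

    // Method-level settings apply only to this particular benchmark.
    @Warmup(iterations = 5)
    @Measurement(iterations = 5)
    @GenerateMicroBenchmark
    public void measureSomethingHere() {
    }
}
[/code]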