So far I've avoided the nightmare that is testing multi-threaded code, since it just seems like too much of a minefield. I'd like to ask how people have gone about testing code that relies on threads for successful execution, or how people test the kinds of problems that only show up when two threads interact in a given manner.
This seems like a really key problem for programmers today, and it would be useful, imho, to pool our knowledge on it.
Current answer
There are some good tools out there. Here is a summary of a few of them for Java.
Some good static analysis tools are FindBugs (gives some useful hints), JLint, Java Pathfinder (JPF & JPF2), and Bogor.
MultithreadedTC is a very good dynamic analysis tool (integrated with JUnit) in which you have to set up your own test cases.
ConTest from IBM Research is interesting. It instruments your code by inserting schedule-altering behaviours (e.g. sleep & yield) to try to flush out bugs at random.
SPIN is a really cool tool for modelling your Java (and other) components, but you need some useful framework around it. It is hard to use as-is, but extremely powerful if you know how to use it. Quite a few tools use SPIN under the hood.
MultithreadedTC is probably the most mainstream of these, but some of the static analysis tools listed above are definitely worth looking at.
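To give a feel for MultithreadedTC, here is a minimal sketch based on its documented bounded-buffer example as I recall it (treat the edu.umd.cs.mtc package and method names as an assumption, not a verified reference): every method named thread*() runs in its own thread, and waitForTick()/assertTick() impose a deterministic interleaving.

import static org.junit.Assert.assertEquals;
import java.util.concurrent.ArrayBlockingQueue;
import edu.umd.cs.mtc.MultithreadedTestCase;
import edu.umd.cs.mtc.TestFramework;

// Sketch of a MultithreadedTC test: the framework advances its logical clock
// ("ticks") whenever all test threads are blocked, so the interleaving below
// is reproducible on every run.
public class BlockingQueueInterleavingTest extends MultithreadedTestCase {
    private ArrayBlockingQueue<Integer> buf;

    @Override
    public void initialize() {
        buf = new ArrayBlockingQueue<Integer>(1);
    }

    public void thread1() throws InterruptedException {
        buf.put(42);
        buf.put(17);        // blocks: capacity is 1
        assertTick(1);      // we only get here after thread2's first take()
    }

    public void thread2() throws InterruptedException {
        waitForTick(1);     // wait until thread1 is blocked on its second put()
        assertEquals(42, (int) buf.take());
        assertEquals(17, (int) buf.take());
    }

    public static void main(String[] args) throws Throwable {
        TestFramework.runOnce(new BlockingQueueInterleavingTest());
    }
}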
Other answers
For Java, see chapter 12 of JCIP (Java Concurrency in Practice). There are some concrete examples of writing deterministic, multi-threaded unit tests to at least check the correctness and invariants of concurrent code.
"Proving" thread safety with unit tests is much dicier. My belief is that this is better served by automated integration testing on a variety of platforms/configurations.
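As a rough sketch of the kind of correctness test JCIP describes, the test below hammers a bounded buffer from many producer and consumer threads released simultaneously by a barrier, then compares checksums of what went in against what came out. The thread counts and the choice of ArrayBlockingQueue as the component under test are illustrative assumptions.

import static org.junit.Assert.assertEquals;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.CyclicBarrier;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicLong;
import org.junit.Test;

public class PutTakeTest {
    private static final int PAIRS = 10;    // producer/consumer pairs
    private static final int TRIALS = 1000; // items per producer

    @Test(timeout = 60 * 1000)
    public void putsEqualTakes() throws Exception {
        BlockingQueue<Integer> buffer = new ArrayBlockingQueue<Integer>(10);
        CyclicBarrier barrier = new CyclicBarrier(PAIRS * 2 + 1); // workers + test thread
        AtomicLong putSum = new AtomicLong();
        AtomicLong takeSum = new AtomicLong();
        ExecutorService pool = Executors.newCachedThreadPool();

        for (int i = 0; i < PAIRS; i++) {
            pool.execute(() -> {                 // producer
                try {
                    barrier.await();             // start everyone at once
                    for (int n = 0; n < TRIALS; n++) {
                        buffer.put(n);
                        putSum.addAndGet(n);
                    }
                    barrier.await();             // signal completion
                } catch (Exception e) { throw new RuntimeException(e); }
            });
            pool.execute(() -> {                 // consumer
                try {
                    barrier.await();
                    for (int n = 0; n < TRIALS; n++) {
                        takeSum.addAndGet(buffer.take());
                    }
                    barrier.await();
                } catch (Exception e) { throw new RuntimeException(e); }
            });
        }

        barrier.await();                           // release the workers
        barrier.await();                           // wait for them all to finish
        pool.shutdown();
        pool.awaitTermination(10, TimeUnit.SECONDS);
        assertEquals(putSum.get(), takeSum.get()); // nothing lost, nothing duplicated
    }
}

Note the @Test timeout, which matches the advice further down: a broken buffer is more likely to deadlock than to fail an assertion.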
I had serious problems testing multi-threaded code too. Then I found a really cool solution in "xUnit Test Patterns" by Gerard Meszaros. The pattern he describes is called Humble Object.
Basically, it describes how you can extract the logic into a separate, easy-to-test component that is decoupled from its environment. Once you have tested that logic, you can test the complicated behaviour (multi-threading, asynchronous execution, and so on).
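A minimal sketch of the idea, with made-up class names: all the interesting logic lives in a plain synchronous class, while the threading wrapper stays too "humble" to need a multi-threaded test of its own.

// Hypothetical Humble Object example: Processor holds the logic worth testing
// and can be exercised synchronously in an ordinary unit test.
class Processor {
    int process(int input) {
        return input * 2;
    }
}

// The humble part: it only starts a thread and delegates, so there is almost
// nothing here that can break.
class ProcessingThread extends Thread {
    private final Processor processor = new Processor();
    private final int input;
    volatile int result;

    ProcessingThread(int input) { this.input = input; }

    @Override
    public void run() {
        result = processor.process(input);
    }
}

The unit test then exercises Processor directly, e.g. assertEquals(4, new Processor().process(2)), without starting any threads.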
(If you can) don't use threads directly; use actors / active objects instead. Easy to test.
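A hedged sketch of why that helps with testing (the Executor-based active object below is my own illustration, not something from the answer): if the executor is injected, a test can pass a same-thread executor and the whole object becomes deterministic.

import java.util.concurrent.Executor;
import java.util.function.LongConsumer;

// Illustrative active object: all access to its state goes through messages
// submitted to an injected Executor; nothing is shared directly.
class CounterActor {
    private final Executor executor;
    private long count;   // only ever touched on the executor's thread

    CounterActor(Executor executor) { this.executor = executor; }

    void increment() {
        executor.execute(() -> count++);
    }

    void read(LongConsumer callback) {
        executor.execute(() -> callback.accept(count));
    }
}

In production you would pass something like Executors.newSingleThreadExecutor(); in a unit test you can pass Runnable::run, so every message is processed immediately on the calling thread and assertions are deterministic.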
I have done a lot of this, and yes, it is awful.
Some tips:
- GroboUtils for running multiple test threads.
- alphaWorks ConTest to instrument classes so that interleavings vary between iterations.
- Create a throwable field and check it in tearDown (see Listing 1). If you catch a bad exception in another thread, just assign it to throwable.
- I created the utils class in Listing 2 and have found it invaluable, especially waitForVerify and waitForCondition, which will greatly increase the performance of your tests.
- Make good use of AtomicBoolean in your tests. It is thread safe, and you'll often need a final reference type to store values from callback classes and suchlike. See the example in Listing 3.
- Make sure to always give your test a timeout (e.g., @Test(timeout=60*1000)), as concurrency tests can sometimes hang forever when they're broken.
Listing 1:
private volatile Throwable throwable;

@After
public void tearDown() throws Throwable {
    if ( throwable != null )
        throw throwable;   // re-throw any failure captured in another thread
}
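For completeness, the assignment side of that pattern looks something like the following inside a test method (doSomethingThatShouldNotFail() is a hypothetical placeholder for the code under test):

@Test(timeout = 60 * 1000)
public void testWorkerDoesNotFail() throws Exception {
    Thread worker = new Thread(new Runnable() {
        public void run() {
            try {
                doSomethingThatShouldNotFail();   // hypothetical code under test
            } catch (Throwable t) {
                throwable = t;                    // re-thrown later by tearDown()
            }
        }
    });
    worker.start();
    worker.join();
}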
Listing 2:
import static org.junit.Assert.fail;
import java.io.File;
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Proxy;
import java.util.Random;
import org.apache.commons.collections.Closure;
import org.apache.commons.collections.Predicate;
import org.apache.commons.lang.time.StopWatch;
import org.easymock.EasyMock;
import org.easymock.classextension.internal.ClassExtensionHelper;
import static org.easymock.classextension.EasyMock.*;
import ca.digitalrapids.io.DRFileUtils;
/**
* Various utilities for testing
*/
public abstract class DRTestUtils
{
static private Random random = new Random();
/** Calls {@link #waitForCondition(Integer, Integer, Predicate, String)} with
* default max wait and check period values.
*/
static public void waitForCondition(Predicate predicate, String errorMessage)
throws Throwable
{
waitForCondition(null, null, predicate, errorMessage);
}
/** Blocks until a condition is true, throwing an {@link AssertionError} if
* it does not become true during a given max time.
* @param maxWait_ms max time to wait for true condition. Optional; defaults
* to 30 * 1000 ms (30 seconds).
* @param checkPeriod_ms period at which to try the condition. Optional; defaults
* to 100 ms.
* @param predicate the condition
* @param errorMessage message use in the {@link AssertionError}
* @throws Throwable on {@link AssertionError} or any other exception/error
*/
static public void waitForCondition(Integer maxWait_ms, Integer checkPeriod_ms,
Predicate predicate, String errorMessage) throws Throwable
{
waitForCondition(maxWait_ms, checkPeriod_ms, predicate, new Closure() {
public void execute(Object errorMessage)
{
fail((String)errorMessage);
}
}, errorMessage);
}
/** Blocks until a condition is true, running a closure if
* it does not become true during a given max time.
* @param maxWait_ms max time to wait for true condition. Optional; defaults
* to 30 * 1000 ms (30 seconds).
* @param checkPeriod_ms period at which to try the condition. Optional; defaults
* to 100 ms.
* @param predicate the condition
* @param closure closure to run
* @param argument argument for closure
* @throws Throwable on {@link AssertionError} or any other exception/error
*/
static public void waitForCondition(Integer maxWait_ms, Integer checkPeriod_ms,
Predicate predicate, Closure closure, Object argument) throws Throwable
{
if ( maxWait_ms == null )
maxWait_ms = 30 * 1000;
if ( checkPeriod_ms == null )
checkPeriod_ms = 100;
StopWatch stopWatch = new StopWatch();
stopWatch.start();
while ( !predicate.evaluate(null) ) {
Thread.sleep(checkPeriod_ms);
if ( stopWatch.getTime() > maxWait_ms ) {
closure.execute(argument);
}
}
}
/** Calls {@link #waitForVerify(Integer, Object)} with <code>null</code>
* for {@code maxWait_ms}
*/
static public void waitForVerify(Object easyMockProxy)
throws Throwable
{
waitForVerify(null, easyMockProxy);
}
/** Repeatedly calls {@link EasyMock#verify(Object[])} until it succeeds, or a
* max wait time has elapsed.
* @param maxWait_ms Max wait time. <code>null</code> defaults to 30s.
* @param easyMockProxy Proxy to call verify on
* @throws Throwable
*/
static public void waitForVerify(Integer maxWait_ms, Object easyMockProxy)
throws Throwable
{
if ( maxWait_ms == null )
maxWait_ms = 30 * 1000;
StopWatch stopWatch = new StopWatch();
stopWatch.start();
for(;;) {
try
{
verify(easyMockProxy);
break;
}
catch (AssertionError e)
{
if ( stopWatch.getTime() > maxWait_ms )
throw e;
Thread.sleep(100);
}
}
}
/** Returns a path to a directory in the temp dir with the name of the given
* class. This is useful for temporary test files.
* @param object the test class, or an instance of it, for which to create the dir
* @return the path
*/
static public String getTestDirPathForTestClass(Object object)
{
String filename = object instanceof Class ?
((Class)object).getName() :
object.getClass().getName();
return DRFileUtils.getTempDir() + File.separator +
filename;
}
static public byte[] createRandomByteArray(int bytesLength)
{
byte[] sourceBytes = new byte[bytesLength];
random.nextBytes(sourceBytes);
return sourceBytes;
}
/** Returns <code>true</code> if the given object is an EasyMock mock object
*/
static public boolean isEasyMockMock(Object object) {
try {
InvocationHandler invocationHandler = Proxy
.getInvocationHandler(object);
return invocationHandler.getClass().getName().contains("easymock");
} catch (IllegalArgumentException e) {
return false;
}
}
}
Listing 3:
@Test
public void testSomething() {
final AtomicBoolean called = new AtomicBoolean(false);
subject.setCallback(new SomeCallback() {
public void callback(Object arg) {
// check arg here
called.set(true);
}
});
subject.run();
assertTrue(called.get());
}
I spent most of last week at a university library studying debugging of concurrent code. The central problem is that concurrent code is non-deterministic. Typically, academic debugging has fallen into one of three camps:
- Event-trace/replay. This requires an event monitor and then reviewing the events that were sent. In a UT framework, this would involve manually sending the events as part of a test and then doing post-mortem reviews.
- Scriptable. This is where you interact with the running code with a set of triggers, e.g. "on x > foo, baz()". This could be interpreted into a UT framework where a run-time system triggers a given test on a certain condition.
- Interactive. This obviously won't work in an automated testing situation. ;)
Now, as the commenters above have noticed, you can design your concurrent system to be more deterministic. However, if you don't do that properly, you're back to designing a sequential system again.
My suggestion would be to focus on having a very strict design protocol about what gets threaded and what doesn't. If you constrain your interfaces so that there is minimal dependency between elements, it is much easier.
Good luck, and keep working on the problem.