I want to find out how much time a certain function takes to execute in my C++ program on Linux. Afterwards, I want to make a speed comparison. I looked at several timing functions, but ended up with this one from Boost.Chrono:

process_user_cpu_clock, captures user-CPU time spent by the current process

Now, I'm not clear on this: if I use the above function, will I get the CPU time spent only on that function?

Secondly, I could not find any example of using the above function. Could anyone tell me how to use it?

P.S.: At the moment I am using std::chrono::system_clock::now() to get the time in seconds, but it gives me different results every time because of the varying CPU load.
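
A minimal sketch of how process_user_cpu_clock could be used, assuming Boost.Chrono is installed and the program is linked against it (typically -lboost_chrono):

#include <boost/chrono.hpp>
#include <iostream>

void work()
{
    /* Burn some user-mode CPU time. */
    volatile double x = 0.0;
    for (int i = 0; i < 10000000; ++i)
        x = x + i * 0.5;
}

int main()
{
    /* process_user_cpu_clock counts user-mode CPU time of the whole process,
       so time spent sleeping or blocked is not included. */
    auto t1 = boost::chrono::process_user_cpu_clock::now();
    work();
    auto t2 = boost::chrono::process_user_cpu_clock::now();

    auto us = boost::chrono::duration_cast<boost::chrono::microseconds>(t2 - t1);
    std::cout << us.count() << " us of user CPU time\n";
    return 0;
}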


Current Answer

Since none of the provided answers are very precise or give reproducible results, I decided to add a link to my code, which has sub-nanosecond precision and does scientific statistics.

Note that this only works for measuring code that takes a (very) short time to run (say, a few clock cycles to a few thousand): if the code runs so long that it is likely to be interrupted by some -heh- interrupt, then it is clearly not possible to give a reproducible and accurate result. The consequence is that the measurement never finishes: it keeps measuring until it is statistically 99.9% sure it has the right answer, which never happens on a machine with other processes running when the code under test takes too long.

https://github.com/CarloWood/cwds/blob/master/benchmark.h#L40
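
The linked header does the statistics itself; as a rough illustration of the underlying idea of repeating the measurement until the result stops changing (this is a simplification, not the linked code; the helper name min_time_ns and the stopping rule are made up), something like this can be used:

#include <chrono>
#include <iostream>
#include <limits>

/* Times f() over and over and keeps the smallest observation; stops once the
   minimum has not improved for stable_rounds consecutive runs. */
template <typename F>
double min_time_ns(F f, int stable_rounds = 1000)
{
    using clock = std::chrono::steady_clock;
    double best = std::numeric_limits<double>::max();
    int unchanged = 0;
    while (unchanged < stable_rounds) {
        const auto t1 = clock::now();
        f();
        const auto t2 = clock::now();
        const double ns = std::chrono::duration<double, std::nano>(t2 - t1).count();
        if (ns < best) { best = ns; unchanged = 0; } else { ++unchanged; }
    }
    return best;
}

int main()
{
    std::cout << min_time_ns([] { /* code under test */ }) << " ns\n";
}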

Other Answers

In C++11 this is very easy to do. You have to use std::chrono::high_resolution_clock from the <chrono> header.

Use it like this:

#include <chrono>

/* Only needed for the sake of this example. */
#include <iostream>
#include <thread>
    
void long_operation()
{
    /* Simulating a long, heavy operation. */

    using namespace std::chrono_literals;
    std::this_thread::sleep_for(150ms);
}

int main()
{
    using std::chrono::high_resolution_clock;
    using std::chrono::duration_cast;
    using std::chrono::duration;
    using std::chrono::milliseconds;

    auto t1 = high_resolution_clock::now();
    long_operation();
    auto t2 = high_resolution_clock::now();

    /* Getting number of milliseconds as an integer. */
    auto ms_int = duration_cast<milliseconds>(t2 - t1);

    /* Getting number of milliseconds as a double. */
    duration<double, std::milli> ms_double = t2 - t1;

    std::cout << ms_int.count() << "ms\n";
    std::cout << ms_double.count() << "ms\n";
    return 0;
}

This measures the duration of the function long_operation.

Possible output:

150ms
150.068ms

Working example: https://godbolt.org/z/oe5cMd

Here is a function that measures the execution time of any function passed as an argument:

#include <chrono>
#include <utility>

typedef std::chrono::high_resolution_clock::time_point TimeVar;

#define duration(a) std::chrono::duration_cast<std::chrono::nanoseconds>(a).count()
#define timeNow() std::chrono::high_resolution_clock::now()

template<typename F, typename... Args>
double funcTime(F func, Args&&... args){
    TimeVar t1=timeNow();
    func(std::forward<Args>(args)...);
    return duration(timeNow()-t1);
}

Usage example:

#include <iostream>
#include <algorithm>
#include <string>

typedef std::string String;

//first test function doing something
int countCharInString(String s, char delim){
    int count=0;
    String::size_type pos = s.find_first_of(delim);
    while ((pos = s.find_first_of(delim, pos)) != String::npos){
        count++;pos++;
    }
    return count;
}

//second test function doing the same thing in different way
int countWithAlgorithm(String s, char delim){
    return std::count(s.begin(),s.end(),delim);
}


int main(){
    std::cout<<"norm: "<<funcTime(countCharInString,"precision=10",'=')<<"\n";
    std::cout<<"algo: "<<funcTime(countWithAlgorithm,"precision=10",'=');
    return 0;
}

Output:

norm: 15555
algo: 2976
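
For what it's worth, the same helper can be written without the macros, returning a std::chrono duration instead of a raw double (a sketch; the name measure is not from the answer above):

#include <chrono>
#include <iostream>
#include <utility>

/* Measures the wall-clock time of one call to func(args...). */
template<typename F, typename... Args>
std::chrono::nanoseconds measure(F&& func, Args&&... args)
{
    const auto t1 = std::chrono::high_resolution_clock::now();
    std::forward<F>(func)(std::forward<Args>(args)...);
    const auto t2 = std::chrono::high_resolution_clock::now();
    return std::chrono::duration_cast<std::chrono::nanoseconds>(t2 - t1);
}

int main()
{
    const auto ns = measure([](int n) {
        volatile long long sum = 0;   /* volatile so the loop is not optimized away */
        for (int i = 0; i < n; ++i) sum = sum + i;
    }, 1000000);
    std::cout << ns.count() << " ns\n";
}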

I recommend using steady_clock, which is guaranteed to be monotonic, unlike high_resolution_clock.

#include <iostream>
#include <chrono>
#include <string>

using namespace std;

unsigned int stopwatch()
{
    static auto start_time = chrono::steady_clock::now();

    auto end_time = chrono::steady_clock::now();
    auto delta    = chrono::duration_cast<chrono::microseconds>(end_time - start_time);

    start_time = end_time;

    return delta.count();
}

int main() {
  stopwatch(); //Start stopwatch
  std::cout << "Hello World!\n";
  cout << stopwatch() << endl; //Time to execute last line
  for (int i=0; i<1000000; i++)
      string s = "ASDFAD";
  cout << stopwatch() << endl; //Time to execute for loop
}

Output:

Hello World!
62
163514
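
A related pattern built on steady_clock is a small RAII timer that prints the elapsed time when it goes out of scope (a sketch; the class name ScopedTimer is made up here):

#include <chrono>
#include <iostream>
#include <string>

/* Prints the elapsed time for the enclosing scope when it is destroyed. */
class ScopedTimer {
public:
    explicit ScopedTimer(std::string label)
        : label_(std::move(label)), start_(std::chrono::steady_clock::now()) {}

    ~ScopedTimer() {
        const auto end = std::chrono::steady_clock::now();
        const auto us  = std::chrono::duration_cast<std::chrono::microseconds>(end - start_);
        std::cout << label_ << ": " << us.count() << " us\n";
    }

private:
    std::string label_;
    std::chrono::steady_clock::time_point start_;
};

int main() {
    ScopedTimer timer("whole main");
    std::cout << "Hello World!\n";
}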


A cleaned-up C++11 version of Jahid's answer:

#include <chrono>
#include <iostream>
#include <thread>

void long_operation(int ms)
{
    /* Simulating a long, heavy operation. */
    std::this_thread::sleep_for(std::chrono::milliseconds(ms));
}

template<typename F, typename... Args>
double funcTime(F func, Args&&... args){
    std::chrono::high_resolution_clock::time_point t1 = 
        std::chrono::high_resolution_clock::now();
    func(std::forward<Args>(args)...);
    return std::chrono::duration_cast<std::chrono::milliseconds>(
        std::chrono::high_resolution_clock::now()-t1).count();
}

int main()
{
    std::cout<<"expect 150: "<<funcTime(long_operation,150)<<"\n";

    return 0;
}
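
Finally, if what is needed is CPU time rather than wall-clock time (as in the original question), the standard std::clock reports the approximate processor time used by the process; a minimal sketch:

#include <cstdio>
#include <ctime>

int main()
{
    const std::clock_t c1 = std::clock();

    /* Code to time: burn some CPU. */
    volatile double x = 0.0;
    for (int i = 0; i < 10000000; ++i)
        x = x + i * 0.5;

    const std::clock_t c2 = std::clock();

    /* CLOCKS_PER_SEC converts clock ticks to seconds. */
    std::printf("CPU time: %.3f ms\n",
                1000.0 * static_cast<double>(c2 - c1) / CLOCKS_PER_SEC);
    return 0;
}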