Poor maths performance in C compared with Python/numpy

Near-duplicates / related:

  • How does BLAS get such extreme performance? (If you want a fast matmul in C, then seriously, use a good BLAS library unless you want to hand-tune your own asm version.) But that doesn't mean it isn't instructive to see what happens when you compile under-optimised matrix code.
  • How to optimise matrix multiplication (matmul) code to run fast on a single processor core
  • Matrix multiplication with blocks

Out of interest, I decided to compare the performance of (inexpertly) hand-written C against Python/numpy performing a simple matrix multiplication of two large, square matrices filled with random numbers from 0 to 1.

I found that Python/numpy beat my C code by more than 10,000x. This is clearly not right, so what is wrong with my C code that makes it perform so badly? (even compiled with -O3 or -Ofast)

Python:

    import time
    import numpy as np

    t0 = time.time()
    m1 = np.random.rand(2000, 2000)
    m2 = np.random.rand(2000, 2000)
    t1 = time.time()
    m3 = m1 @ m2
    t2 = time.time()
    print('creation time: ', t1 - t0, ' \n multiplication time: ', t2 - t1)

C:

    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    int main(void) {
        clock_t t0 = clock(), t1, t2;

        // create matrices and allocate memory
        int m_size = 2000;
        int i, j, k;
        double running_sum;
        double *m1[m_size], *m2[m_size], *m3[m_size];
        double f_rand_max = (double)RAND_MAX;
        for (i = 0; i < m_size; i++) {
            m1[i] = (double *)malloc(sizeof(double) * m_size);
            m2[i] = (double *)malloc(sizeof(double) * m_size);
            m3[i] = (double *)malloc(sizeof(double) * m_size);
        }

        // populate with random numbers 0 - 1
        for (i = 0; i < m_size; i++)
            for (j = 0; j < m_size; j++) {
                m1[i][j] = (double)rand() / f_rand_max;
                m2[i][j] = (double)rand() / f_rand_max;
            }
        t1 = clock();

        // multiply together
        for (i = 0; i < m_size; i++)
            for (j = 0; j < m_size; j++) {
                running_sum = 0;
                for (k = 0; k < m_size; k++)
                    running_sum += m1[i][k] * m2[k][j];
                m3[i][j] = running_sum;
            }
        t2 = clock();

        float t01 = (float)(t1 - t0) / CLOCKS_PER_SEC;
        float t12 = (float)(t2 - t1) / CLOCKS_PER_SEC;
        printf("creation time: %f", t01);
        printf("\nmultiplication time: %f", t12);
        return 0;
    }

Edit: I have corrected the Python to do a proper dot product, which closes the gap a little; the C timing now has microsecond resolution and uses the comparable double data type, rather than the float originally posted.

Output:

    $ gcc -O3 -march=native bench.c
    $ ./a.out
    creation time: 0.092651
    multiplication time: 139.945068
    $ python3 bench.py
    creation time: 0.1473407745361328
    multiplication time: 0.329038143157959

It has been pointed out that the naive algorithm implemented here in C could be improved in ways that make better use of compiler optimisations and the cache.

Edit: modified the C code to transpose the second matrix to achieve a more efficient access pattern; the gap narrows considerably.

The modified multiplication code:

    // transpose m2 in order to capitalise on cache efficiencies
    // store transposed matrix in m3 for now
    for (i = 0; i < m_size; i++)
        for (j = 0; j < m_size; j++)
            m3[j][i] = m2[i][j];

    // swap the row pointers so m2 refers to the transposed data
    // (every row pointer must be swapped, not just the first)
    for (i = 0; i < m_size; i++) {
        double *mtemp = m3[i];
        m3[i] = m2[i];
        m2[i] = mtemp;
    }

    // multiply together
    for (i = 0; i < m_size; i++)
        for (j = 0; j < m_size; j++) {
            running_sum = 0;
            for (k = 0; k < m_size; k++)
                running_sum += m1[i][k] * m2[j][k];
            m3[i][j] = running_sum;
        }

Results:

    $ gcc -O3 -march=native bench2.c
    $ ./a.out
    creation time: 0.107767
    multiplication time: 10.843431
    $ python3 bench.py
    creation time: 0.1488208770751953
    multiplication time: 0.3335080146789551

Edit: compiling with -Ofast, which I'm assured is a fairer comparison, brings the difference down to one order of magnitude (in numpy's favour).

    $ gcc -Ofast -march=native bench2.c
    $ ./a.out
    creation time: 0.098201
    multiplication time: 4.766985
    $ python3 bench.py
    creation time: 0.13812589645385742
    multiplication time: 0.3441300392150879

Edit: it was suggested to change the indexing from arr[i][j] to arr[i * m_size + j]; this yielded a small performance gain:

    for m_size = 10000

    $ gcc -Ofast -march=native bench3.c   # indexed by arr[i * m_size + j]
    $ ./a.out
    creation time: 1.280863
    multiplication time: 626.327820
    $ gcc -Ofast -march=native bench2.c   # indexed by arr[i][j]
    $ ./a.out
    creation time: 2.410230
    multiplication time: 708.979980
    $ python3 bench.py
    creation time: 3.8284950256347656
    multiplication time: 39.06089973449707

The latest code, bench3.c:

    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    int main(void) {
        clock_t t0, t1, t2;
        t0 = clock();

        // create matrices and allocate memory
        int m_size = 10000;
        int i, j, k, x, y;
        double running_sum;
        double *m1 = (double *)malloc(sizeof(double) * m_size * m_size),
               *m2 = (double *)malloc(sizeof(double) * m_size * m_size),
               *m3 = (double *)malloc(sizeof(double) * m_size * m_size);
        double f_rand_max = (double)RAND_MAX;

        // populate with random numbers 0 - 1
        for (i = 0; i < m_size; i++) {
            x = i * m_size;
            for (j = 0; j < m_size; j++) {
                m1[x + j] = ((double)rand()) / f_rand_max;
                m2[x + j] = ((double)rand()) / f_rand_max;
                m3[x + j] = ((double)rand()) / f_rand_max;
            }
        }
        t1 = clock();

        // transpose m2 in order to capitalise on cache efficiencies
        // store transposed matrix in m3 for now
        for (i = 0; i < m_size; i++)
            for (j = 0; j < m_size; j++)
                m3[j * m_size + i] = m2[i * m_size + j];

        // swap the pointers
        double *mtemp = m3;
        m3 = m2;
        m2 = mtemp;

        // multiply together
        for (i = 0; i < m_size; i++) {
            x = i * m_size;
            for (j = 0; j < m_size; j++) {
                running_sum = 0;
                y = j * m_size;
                for (k = 0; k < m_size; k++)
                    running_sum += m1[x + k] * m2[y + k];
                m3[x + j] = running_sum;
            }
        }
        t2 = clock();

        float t01 = (float)(t1 - t0) / CLOCKS_PER_SEC;
        float t12 = (float)(t2 - t1) / CLOCKS_PER_SEC;
        printf("creation time: %f", t01);
        printf("\nmultiplication time: %f", t12);
        return 0;
    }

Conclusion: the original absurd factor of x10,000 was largely down to mistakenly comparing an element-wise multiplication in Python/numpy against the C code, not compiling with all the available optimisations, and writing a very inefficient memory access pattern that probably made no use of the cache. A "fair" comparison (i.e. a correct but highly inefficient single-threaded algorithm, compiled with -Ofast) yields a performance factor of x350. A few simple edits to improve the memory access pattern brought the comparison down to a factor of x16 (in numpy's favour) for large (10,000 x 10,000) matrix multiplication.

Furthermore, numpy automatically uses all four virtual cores on my machine and this C does not, so the real performance difference may be a factor of x4 to x8 (depending on how well this program runs on hyperthreads). Considering that I really don't know what I'm doing and just threw some code together, whereas numpy is built on BLAS, which I understand has been extensively optimised over the years by experts from all over the place, I think a factor of x4 to x8 is quite respectable, and I consider the question answered/resolved.