What is the difference between these functions?

tf.variable_op_scope(values, name, default_name, initializer=None) returns a context manager for defining an op that creates variables. This context manager validates that the given values are from the same graph, ensures that that graph is the default graph, and pushes a name scope and a variable scope.


tf.op_scope(values, name, default_name=None) returns a context manager for use when defining a Python op. This context manager validates that the given values are from the same graph, ensures that that graph is the default graph, and pushes a name scope.


tf.name_scope(name) is a wrapper for Graph.name_scope() using the default graph. See Graph.name_scope() for more details.


tf.variable_scope(name_or_scope, reuse=None, initializer=None) returns a context for a variable scope. Variable scope allows you to create new variables and to share already created ones, while providing checks against accidental creation or sharing. For details, see the Variable Scope How To; here we present only a few basic examples.
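
To make the practical difference concrete, here is a minimal sketch (assuming the TensorFlow 1.x-era graph API described above; the variable names are illustrative) showing that tf.get_variable ignores name scopes but respects variable scopes:

import tensorflow as tf

with tf.name_scope("ns"):
    a = tf.get_variable("a", [1])             # name scope is ignored by get_variable
    b = tf.Variable(tf.zeros([1]), name="b")  # name scope is applied

with tf.variable_scope("vs"):
    c = tf.get_variable("c", [1])             # variable scope is applied

print(a.name)  # a:0
print(b.name)  # ns/b:0
print(c.name)  # vs/c:0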

I have noticed that some newer TensorFlow versions are incompatible with older CUDA and cuDNN versions. Is there an overview of the compatible versions, or even a list of officially tested combinations? I can't find it in the TensorFlow documentation.

I have been using the introductory example of matrix multiplication in TensorFlow.

import tensorflow as tf

matrix1 = tf.constant([[3., 3.]])
matrix2 = tf.constant([[2.], [2.]])
product = tf.matmul(matrix1, matrix2)

When I print the product, it is displayed as a Tensor object:

<tensorflow.python.framework.ops.Tensor object at 0x10470fcd0>

But how do I find out the value of product?

The following does not help:

print product
Tensor("MatMul:0", shape=TensorShape([Dimension(1), Dimension(1)]), dtype=float32)

I know the graph runs in a Session, but isn't there any way I can check the output of a Tensor object without running the graph in a session?
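
For reference, a minimal sketch of the standard way to get the value, by running the graph in a session (assuming the graph-mode TensorFlow 1.x API used in the example above):

import tensorflow as tf

matrix1 = tf.constant([[3., 3.]])
matrix2 = tf.constant([[2.], [2.]])
product = tf.matmul(matrix1, matrix2)

with tf.Session() as sess:
    # Running the graph materializes the tensor and returns its value.
    print(sess.run(product))   # [[ 12.]]
    # Equivalent shorthand; eval() uses the session installed as default.
    print(product.eval())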

How can I convert a tensor into a numpy array when using TensorFlow with Python bindings?
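
A minimal sketch, assuming graph-mode TensorFlow 1.x: Session.run() (and Tensor.eval()) already return numpy arrays, so no separate conversion step is needed:

import numpy as np
import tensorflow as tf

t = tf.constant([[1., 2.], [3., 4.]])

with tf.Session() as sess:
    arr = sess.run(t)                   # returns a numpy.ndarray
    print(isinstance(arr, np.ndarray))  # True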

This is the message I receive when running a script to check whether TensorFlow is working:

I tensorflow/stream_executor/dso_loader.cc:125] successfully opened CUDA library libcublas.so.8.0 locally
I tensorflow/stream_executor/dso_loader.cc:125] successfully opened CUDA library libcudnn.so.5 locally
I tensorflow/stream_executor/dso_loader.cc:125] successfully opened CUDA library libcufft.so.8.0 locally
I tensorflow/stream_executor/dso_loader.cc:125] successfully opened CUDA library libcuda.so.1 locally
I tensorflow/stream_executor/dso_loader.cc:125] successfully opened CUDA library libcurand.so.8.0 locally
W tensorflow/core/platform/cpu_feature_guard.cc:95] The TensorFlow library wasn't compiled to use SSE4.2 instructions, but these are available on your machine and could speed up CPU computations.
W tensorflow/core/platform/cpu_feature_guard.cc:95] The TensorFlow library wasn't compiled to use AVX instructions, but these are available on your machine and could speed up CPU computations.
I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:910] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero

I noticed that it mentions SSE4.2 and AVX.

What are SSE4.2 and AVX? How do SSE4.2 and AVX improve CPU computations for TensorFlow tasks? And how can I compile TensorFlow with support for these two instruction sets?
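
For reference, a hedged sketch: SSE4.2 and AVX are CPU SIMD instruction-set extensions, and support for them is typically enabled when building TensorFlow from source by passing compiler flags to bazel (the exact package target can vary between releases):

bazel build -c opt --copt=-msse4.2 --copt=-mavx //tensorflow/tools/pip_package:build_pip_package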

I'm new to TensorFlow, and I'm confused about the difference between tf.placeholder and tf.Variable. In my opinion, tf.placeholder is used for input data, and tf.Variable is used to store the state of data. This is all I know.

Could anyone explain their differences to me in more detail? In particular, when should I use tf.Variable and when tf.placeholder?
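
To make the question concrete, here is a minimal sketch of how the two are typically used together (assuming the graph-mode TensorFlow 1.x API):

import tensorflow as tf

# tf.Variable: trainable state; must be initialized and persists across run() calls.
w = tf.Variable(tf.zeros([2, 1]), name="weights")

# tf.placeholder: a slot for input data, filled at run time through feed_dict.
x = tf.placeholder(tf.float32, shape=[None, 2], name="inputs")
y = tf.matmul(x, w)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())    # initializes w; placeholders need no init
    print(sess.run(y, feed_dict={x: [[1., 2.]]}))  # [[ 0.]]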

How can I suppress this debugging information? By debugging information I mean what TensorFlow shows in my terminal about loaded libraries and found devices, etc., not Python errors.

I tensorflow/stream_executor/dso_loader.cc:105] successfully opened CUDA library libcublas.so locally
I tensorflow/stream_executor/dso_loader.cc:105] successfully opened CUDA library libcudnn.so locally
I tensorflow/stream_executor/dso_loader.cc:105] successfully opened CUDA library libcufft.so locally
I tensorflow/stream_executor/dso_loader.cc:105] successfully opened CUDA library libcuda.so.1 locally
I tensorflow/stream_executor/dso_loader.cc:105] successfully opened CUDA library libcurand.so locally
I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:900] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
I tensorflow/core/common_runtime/gpu/gpu_init.cc:102] Found device 0 with properties: 
name: Graphics Device
major: 5 minor: 2 memoryClockRate (GHz) 1.0885
pciBusID 0000:04:00.0
Total memory: 12.00GiB
Free memory: 11.83GiB
I tensorflow/core/common_runtime/gpu/gpu_init.cc:126] DMA: 0 
I tensorflow/core/common_runtime/gpu/gpu_init.cc:136] 0:   Y 
I tensorflow/core/common_runtime/gpu/gpu_device.cc:717] Creating TensorFlow device (/gpu:0) -> (device: 0, name: Graphics Device, pci bus id: 0000:04:00.0)
I tensorflow/core/common_runtime/gpu/gpu_bfc_allocator.cc:51] Creating bin of max chunk size 1.0KiB
...
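
Assuming the goal is to silence these C++-level log lines, a common sketch is to raise the log threshold through the TF_CPP_MIN_LOG_LEVEL environment variable before TensorFlow is imported (available in newer TensorFlow versions):

import os

# 0 = all logs (default), 1 = filter out INFO, 2 = also filter WARNING, 3 = only errors.
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '2'

import tensorflow as tf  # the I/W lines above should no longer be printed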

I installed the latest version of Python (3.6.4 64-bit) and the latest version of PyCharm (2017.3.3 64-bit). Then I installed some modules in PyCharm (Numpy, Pandas, etc.), but when I tried installing TensorFlow it didn't install, and I got the error message:

Could not find a version that satisfies the requirement TensorFlow
No matching distribution found for TensorFlow

Then I tried installing TensorFlow from the command prompt and I got the same error message. I did, however, successfully install tflearn.

I also installed Python 2.7, but I got the same error message again. I googled the error and tried some of the things that were suggested to other people, but nothing worked (this included installing Flask).

How can I install TensorFlow? Thanks.
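
For what it's worth, a hedged sketch of the usual first checks (assuming the cause is that pip cannot find a wheel for the interpreter in use, since TensorFlow only publishes wheels for specific 64-bit Python versions):

# Confirm the interpreter is 64-bit and a version TensorFlow ships wheels for:
python -c "import struct, sys; print(sys.version, struct.calcsize('P') * 8, 'bit')"

# Old pip releases cannot read newer wheel tags, so upgrade pip first, then retry:
python -m pip install --upgrade pip
python -m pip install tensorflow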

I work in an environment in which computational resources are shared, i.e., we have a few server machines, each equipped with a few Nvidia Titan X GPUs.

For small to moderate size models, the 12 GB of the Titan X is usually enough for 2–3 people to run training concurrently on the same GPU. If the models are small enough that a single model does not take full advantage of all the computational units of the GPU, this can actually result in a speedup compared with running one training process after the other. Even in cases where the concurrent access to the GPU does slow down the individual training time, it is still nice to have the flexibility of having multiple users simultaneously train on the GPU.

The problem with TensorFlow is that, by default, it allocates the full amount of available GPU memory when it is launched. Even for a small two-layer neural network, I see that all 12 GB of the GPU memory is used up.

Is there a way to make TensorFlow only allocate, say, 4 GB of GPU memory, if one knows that this is enough for a given model?
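
For reference, a minimal sketch using the TensorFlow 1.x session configuration (the fraction value here is an illustrative assumption, roughly 4 GB out of 12 GB):

import tensorflow as tf

# Cap the per-process allocation at about one third of each visible GPU's memory...
gpu_options = tf.GPUOptions(per_process_gpu_memory_fraction=0.333)
sess = tf.Session(config=tf.ConfigProto(gpu_options=gpu_options))

# ...or let allocations grow on demand instead of grabbing everything up front:
config = tf.ConfigProto()
config.gpu_options.allow_growth = True
sess = tf.Session(config=config)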

I need to find out which version of TensorFlow I have installed. I'm using Ubuntu 16.04 Long Term Support.
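
A minimal sketch that should work across versions:

import tensorflow as tf

print(tf.__version__)  # e.g. 1.4.0; alternatively, from a shell: pip show tensorflow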