Using Transforms
transforms in torchvision is mainly used to apply various transformations to images.
The structure and usage of transforms
In PyCharm, press Alt+7 to open the Structure view, where you can see the many classes defined in the module and check how they are used.
transforms.py is essentially a toolbox containing tools such as ToTensor, Resize, and so on.
Feed an image of a suitable format through one of these tools and it outputs the result we want.
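Besides the IDE's Structure view, the classes available in the module can also be listed from Python itself (a quick sketch):
from torchvision import transforms

# print the public names defined in the transforms module (ToTensor, Resize, Compose, ...)
print([name for name in dir(transforms) if not name.startswith("_")])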
Python usage --> the Tensor data type
We use transforms.ToTensor to answer two questions about transforms:
- How should transforms be used (in Python)?
from PIL import Image
from torchvision import transforms

img_path = "dataset/train/ants_image/0013035.jpg"
img = Image.open(img_path)      # read the image as a PIL Image
print(img)

tensor_trans = transforms.ToTensor()   # create the ToTensor tool
tensor_img = tensor_trans(img)         # call it to convert the PIL Image into a torch.Tensor
print(tensor_img)
Error message:
A module that was compiled using NumPy 1.x cannot be run in
NumPy 2.0.1 as it may crash. To support both 1.x and 2.x
versions of NumPy, modules must be compiled with NumPy 2.0.
Some module may need to rebuild instead e.g. with 'pybind11>=2.12'.
If you are a user of the module, the easiest solution will be to
downgrade to 'numpy<2' or try to upgrade the affected module.
We expect that some modules will need time to support NumPy 2.
Traceback (most recent call last):
File "D:\desktop\learn_dl\pytorch_1\transforms.py", line 2, in <module>
from torchvision import transforms
File "D:\Anaconda_python3.12\envs\py3.10\lib\site-packages\torchvision\__init__.py", line 6, in <module>
from torchvision import datasets, io, models, ops, transforms, utils
File "D:\Anaconda_python3.12\envs\py3.10\lib\site-packages\torchvision\models\__init__.py", line 17, in <module>
from . import detection, optical_flow, quantization, segmentation, video
File "D:\Anaconda_python3.12\envs\py3.10\lib\site-packages\torchvision\models\detection\__init__.py", line 1, in <module>
from .faster_rcnn import *
File "D:\Anaconda_python3.12\envs\py3.10\lib\site-packages\torchvision\models\detection\faster_rcnn.py", line 16, in <module>
from .anchor_utils import AnchorGenerator
File "D:\Anaconda_python3.12\envs\py3.10\lib\site-packages\torchvision\models\detection\anchor_utils.py", line 10, in <module>
class AnchorGenerator(nn.Module):
File "D:\Anaconda_python3.12\envs\py3.10\lib\site-packages\torchvision\models\detection\anchor_utils.py", line 63, in AnchorGenerator
device: torch.device = torch.device("cpu"),
D:\Anaconda_python3.12\envs\py3.10\lib\site-packages\torchvision\models\detection\anchor_utils.py:63: UserWarning: Failed to initialize NumPy: _ARRAY_API not found (Triggered internally at C:\cb\pytorch_1000000000000\work\torch\csrc\utils\tensor_numpy.cpp:84.)
device: torch.device = torch.device("cpu"),
<PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=768x512 at 0x24005700280>
Traceback (most recent call last):
File "D:\desktop\learn_dl\pytorch_1\transforms.py", line 20, in <module>
tensor_img = tensor_trans(img)
File "D:\Anaconda_python3.12\envs\py3.10\lib\site-packages\torchvision\transforms\transforms.py", line 137, in __call__
return F.to_tensor(pic)
File "D:\Anaconda_python3.12\envs\py3.10\lib\site-packages\torchvision\transforms\functional.py", line 166, in to_tensor
img = torch.from_numpy(np.array(pic, mode_to_nptype.get(pic.mode, np.uint8), copy=True))
RuntimeError: Numpy is not available
Following the error message, downgrade NumPy from 2.0.1 to 1.26.4: conda install numpy=1.26.4
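A quick check that the environment now picks up the downgraded version:
import numpy as np
print(np.__version__)   # expected to print 1.26.4 after the downgrade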
The script now runs successfully and prints:
<PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=768x512 at 0x22F968B0280>
tensor([[[0.3137, 0.3137, 0.3137, ..., 0.3176, 0.3098, 0.2980],
[0.3176, 0.3176, 0.3176, ..., 0.3176, 0.3098, 0.2980],
[0.3216, 0.3216, 0.3216, ..., 0.3137, 0.3098, 0.3020],
...,
[0.3412, 0.3412, 0.3373, ..., 0.1725, 0.3725, 0.3529],
[0.3412, 0.3412, 0.3373, ..., 0.3294, 0.3529, 0.3294],
[0.3412, 0.3412, 0.3373, ..., 0.3098, 0.3059, 0.3294]],
[[0.5922, 0.5922, 0.5922, ..., 0.5961, 0.5882, 0.5765],
[0.5961, 0.5961, 0.5961, ..., 0.5961, 0.5882, 0.5765],
[0.6000, 0.6000, 0.6000, ..., 0.5922, 0.5882, 0.5804],
...,
[0.6275, 0.6275, 0.6235, ..., 0.3608, 0.6196, 0.6157],
[0.6275, 0.6275, 0.6235, ..., 0.5765, 0.6275, 0.5961],
[0.6275, 0.6275, 0.6235, ..., 0.6275, 0.6235, 0.6314]],
[[0.9137, 0.9137, 0.9137, ..., 0.9176, 0.9098, 0.8980],
[0.9176, 0.9176, 0.9176, ..., 0.9176, 0.9098, 0.8980],
[0.9216, 0.9216, 0.9216, ..., 0.9137, 0.9098, 0.9020],
...,
[0.9294, 0.9294, 0.9255, ..., 0.5529, 0.9216, 0.8941],
[0.9294, 0.9294, 0.9255, ..., 0.8863, 1.0000, 0.9137],
[0.9294, 0.9294, 0.9255, ..., 0.9490, 0.9804, 0.9137]]])
The pattern is: create a concrete tool, tool = transforms.ToTensor().
Then use the tool: pass in the input and get the output, result = tool(input), as sketched below.
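A minimal sketch of this create-then-call pattern, using ToTensor and Resize (the (256, 256) target size is an arbitrary illustrative value):
from PIL import Image
from torchvision import transforms

img = Image.open("dataset/train/ants_image/0013035.jpg")

# create the tools
to_tensor = transforms.ToTensor()
resize = transforms.Resize((256, 256))   # target size (H, W); chosen arbitrarily here

# use the tools: result = tool(input)
tensor_img = to_tensor(img)    # PIL Image -> torch.Tensor
resized_img = resize(img)      # PIL Image -> resized PIL Image
print(tensor_img.shape)        # torch.Size([3, 512, 768])
print(resized_img.size)        # (256, 256)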
- Why do we need the Tensor data type?
from PIL import Image
from torchvision import transforms
img_path = "dataset/train/ants_image/0013035.jpg"
img = Image.open(img_path)
print(img)
tensor_trans = transforms.ToTensor()
tensor_img = tensor_trans(img)
print(tensor_img)
Inspect tensor_img (for example in the PyCharm debugger): it is of the Tensor data type.
Look at its attributes. Among the protected attributes, _backward_hooks comes from neural-network theory: backpropagation, which uses the result to adjust the earlier parameters.
_grad is the gradient, _grad_fn is the function that produced it, data holds the actual image data, device is the device in use (cpu here), and requires_grad is the most commonly used attribute, which we will need later.
In other words, the Tensor data type packages up the parameters that backpropagation in a neural network relies on.
Why are we sure to use it? Because in a neural network the data has to be converted to Tensor before training, so transforms will definitely be used.
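The same attributes can also be checked directly in code instead of the debugger (a minimal sketch using the same image):
from PIL import Image
from torchvision import transforms

img = Image.open("dataset/train/ants_image/0013035.jpg")
tensor_img = transforms.ToTensor()(img)

print(type(tensor_img))          # <class 'torch.Tensor'>
print(tensor_img.dtype)          # torch.float32
print(tensor_img.shape)          # torch.Size([3, 512, 768])
print(tensor_img.device)         # cpu
print(tensor_img.requires_grad)  # False by default for a freshly created data tensor
print(tensor_img.grad)           # None until a backward pass populates it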
Earlier we read the image as a PIL Image; how do we read it as a numpy.ndarray?
The most common way is OpenCV: pip install opencv-python
import cv2

img_path = "dataset/train/ants_image/0013035.jpg"
cv_img = cv2.imread(img_path)   # OpenCV reads the image as a numpy.ndarray (H x W x C, BGR)
You can see that cv_img is a numpy.ndarray.
ToTensor in PyTorch accounts for both common ways of reading an image: as a PIL Image and as an image read with OpenCV (a numpy.ndarray).
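A minimal sketch (assuming opencv-python is installed and the same image path) applying ToTensor to the ndarray read by OpenCV:
import cv2
from torchvision import transforms

cv_img = cv2.imread("dataset/train/ants_image/0013035.jpg")  # numpy.ndarray, H x W x C, uint8, BGR channel order
print(type(cv_img))                                          # <class 'numpy.ndarray'>

tensor_trans = transforms.ToTensor()
cv_tensor = tensor_trans(cv_img)   # torch.Tensor, C x H x W, values scaled from [0, 255] to [0.0, 1.0]
print(cv_tensor.shape)             # torch.Size([3, 512, 768]) for this 768x512 image
# note: ToTensor does not reorder channels, so they stay in OpenCV's BGR order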
from torch.utils.tensorboard import SummaryWriter
from torchvision import transforms
from PIL import Image

writer = SummaryWriter(log_dir='logs')   # event files are written to the logs/ directory

img_path = "dataset/train/ants_image/0013035.jpg"
img = Image.open(img_path)

tensor_trans = transforms.ToTensor()
tensor_img = tensor_trans(img)

writer.add_image("Tensor_img", tensor_img)   # add_image accepts a CHW tensor by default
writer.close()
In the terminal, run tensorboard --logdir=logs.
Open the address it prints (http://localhost:6006 by default) and you can see the image.
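A side note on layouts: add_image uses dataformats="CHW" by default, which matches the C x H x W tensor that ToTensor produces; an H x W x C numpy.ndarray such as the OpenCV image has to declare its layout explicitly (a minimal sketch reusing the same image path):
from torch.utils.tensorboard import SummaryWriter
from torchvision import transforms
from PIL import Image
import cv2

writer = SummaryWriter(log_dir='logs')
img_path = "dataset/train/ants_image/0013035.jpg"

tensor_img = transforms.ToTensor()(Image.open(img_path))   # C x H x W tensor
cv_img = cv2.imread(img_path)                              # H x W x C ndarray

writer.add_image("Tensor_img", tensor_img)                 # default dataformats="CHW" matches ToTensor output
writer.add_image("CV_img", cv_img, dataformats="HWC")      # ndarray layout must be stated explicitly
writer.close()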
Original source:
Transforms的使用(一)
Transforms的使用(二)