Data Science Asked on July 9, 2021
I am getting the error messages below:
Traceback (most recent call last):
  File "C:\Users\Sam\Desktop\Bitcoin\Q_Learning\DQN_NEW_Original.py", line 122, in <module>
    agent = Agent(lr=0.001, input_dims=env.observation_space.shape, n_actions=env.action_space.n)
  File "C:\Users\Sam\Desktop\Bitcoin\Q_Learning\DQN_NEW_Original.py", line 57, in __init__
    self.Q = LinearDQN(self.lr, self.n_actions, self.input_dims)
  File "C:\Users\Sam\Desktop\Bitcoin\Q_Learning\DQN_NEW_Original.py", line 22, in __init__
    self.fc1 = T.flatten(nn.Linear(*input_dims, 128))
TypeError: flatten(): argument 'input' (position 1) must be Tensor, not Linear
runfile('C:/Users/Sam/Desktop/Bitcoin/Q_Learning/DQN_NEW_Original.py', wdir='C:/Users/Sam/Desktop/Bitcoin/Q_Learning')
  File "C:\Users\Sam\Desktop\Bitcoin\Q_Learning\DQN_NEW_Original.py", line 37
    actions = self.fc2(layer1)
    ^
SyntaxError: invalid syntax
runfile('C:/Users/Sam/Desktop/Bitcoin/Q_Learning/DQN_NEW_Original.py', wdir='C:/Users/Sam/Desktop/Bitcoin/Q_Learning')
Traceback (most recent call last):
  File "C:\Users\Sam\Desktop\Bitcoin\Q_Learning\DQN_NEW_Original.py", line 133, in <module>
    agent.learn(obs, action, reward, obs_)
  File "C:\Users\Sam\Desktop\Bitcoin\Q_Learning\DQN_NEW_Original.py", line 80, in learn
    q_pred = self.Q.forward(states)[actions]
  File "C:\Users\Sam\Desktop\Bitcoin\Q_Learning\DQN_NEW_Original.py", line 36, in forward
    layer1 = F.relu(T.flatten(self.fc1(state)))
  File "C:\Users\Sam\anaconda3\envs\tensorflow2\lib\site-packages\torch\nn\modules\module.py", line 550, in __call__
    result = self.forward(*input, **kwargs)
  File "C:\Users\Sam\anaconda3\envs\tensorflow2\lib\site-packages\torch\nn\modules\linear.py", line 87, in forward
    return F.linear(input, self.weight, self.bias)
  File "C:\Users\Sam\anaconda3\envs\tensorflow2\lib\site-packages\torch\nn\functional.py", line 1610, in linear
    ret = torch.addmm(bias, input, weight.t())
RuntimeError: size mismatch, m1: [30 x 2], m2: [30 x 2] at C:/w/b/windows/pytorch/aten/src/THC/generic/THCTensorMathBlas.cu:283
The error has something to do with the sizes of my tensors. I have tried permuting the input data, but that returns a new error. It is also worth noting that the m1: [30 x 2] in the error grows in multiples of my batch size when I increase the batch size above 1.
The relevant constructor code is:

super(LinearDQN, self).__init__()
self.fc1 = nn.Linear(*input_dims, 128)
self.fc2 = nn.Linear(128, n_actions)

The input dims are [30, 2] and n_actions is 2.
Cheers all,
There are multiple errors in the messages you showed. Perhaps you ran your code several times, commenting out or changing the lines causing errors between runs to see if it worked? Anyway, here are my guesses:
TypeError: flatten(): argument 'input' (position 1) must be Tensor, not Linear: this is because at line 22 of DQN_NEW_Original.py you typed self.fc1 = T.flatten(nn.Linear(*input_dims, 128)), and flatten is just an operation, not a PyTorch module, which means that it receives tensors as arguments, not blocks (which is what Linear is). flatten belongs in the forward method, not in the constructor.
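Putting that together, a minimal corrected sketch (reconstructing the class from the traceback; the lr handling, layer sizes, and start_dim choice are assumptions on my part) could look like:

```python
import torch as T
import torch.nn as nn
import torch.nn.functional as F

class LinearDQN(nn.Module):
    def __init__(self, lr, n_actions, input_dims):
        super(LinearDQN, self).__init__()
        self.lr = lr
        # fc1 takes the flattened observation: for [30, 2] that is 60 features
        n_inputs = int(T.prod(T.tensor(input_dims)))
        self.fc1 = nn.Linear(n_inputs, 128)
        self.fc2 = nn.Linear(128, n_actions)

    def forward(self, state):
        # flatten is applied to the tensor here, in forward, not in __init__;
        # start_dim=-2 collapses the trailing [30, 2] dims into one axis of 60
        flat = T.flatten(state, start_dim=-2)
        layer1 = F.relu(self.fc1(flat))
        actions = self.fc2(layer1)
        return actions
```

Because only the trailing dimensions are flattened, the same forward works for a single [30, 2] observation or a batched [batch, 30, 2] input.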
SyntaxError: invalid syntax: without more context, I am not sure what the cause for this might be.
RuntimeError: size mismatch, m1: [30 x 2], m2: [30 x 2]...: this is because you declared fc1 as self.fc1 = nn.Linear(*input_dims, 128) with input_dims being [30, 2], which leads to the invocation nn.Linear(30, 2, 128). However, if you take a look at Linear's documentation, you will see that it only receives 2 size arguments (not 3, like you used): input features and output features. The input features must match the last dimension of the input tensor.
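As a small standalone illustration of that rule (this is not the asker's code, just the documented nn.Linear behavior):

```python
import torch
import torch.nn as nn

# nn.Linear(in_features, out_features): in_features must equal the
# size of the input tensor's last dimension.
fc = nn.Linear(2, 128)      # accepts any tensor whose last dim is 2
x = torch.randn(30, 2)      # e.g. a [30, 2] observation
y = fc(x)                   # leading dims pass through unchanged: [30, 128]

# To feed all 60 values of a [30, 2] observation into one layer,
# flatten first and size the layer to match:
fc_flat = nn.Linear(30 * 2, 128)
y_flat = fc_flat(x.flatten())   # shape [128]
```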
Answered by noe on July 9, 2021