Hi, I was looking at the implementation of neural-fortran and wanted to try doing PINNs (physics-informed neural networks; see https://www.sciencedirect.com/science/article/pii/S0021999118307125 and https://towardsdatascience.com/physics-informed-neural-networks-pinns-an-intuitive-guide-fff138069563).
To achieve that, it would be necessary to evaluate not only the neural network itself,
z = net%predict( x )
but also its gradient with respect to the input, e.g. something like
dzdx = net%grad_wrt( x ) ( ? )
so that a loss function can be built as a combination of several losses. For instance:
loss = MSE( z_predicted - z_known ) + some_weight * MSE( dzdx_predicted + z_predicted )
where the second term represents some physical residual (here, the residual of the equation dz/dx + z = 0).
This can also be useful in online evaluation of the network, when the gradients are required for computing a Hessian matrix, for instance.
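To make the combination concrete, here is a minimal PyTorch sketch of such a composite loss (net, x, z_known, and some_weight are placeholders I introduce only for illustration, dz/dx + z = 0 is just an example constraint, and x must be created with requires_grad=True):

import torch

def composite_loss(net, x, z_known, some_weight=1.0):
    # data term: match the known values of z
    z_pred = net(x)
    data_loss = torch.mean((z_pred - z_known)**2)
    # physics term: penalize the residual dz/dx + z (example constraint only)
    dzdx = torch.autograd.grad(z_pred, x,
                               grad_outputs=torch.ones_like(z_pred),
                               create_graph=True)[0]
    physics_loss = torch.mean((dzdx + z_pred)**2)
    return data_loss + some_weight * physics_loss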
Here is a minimal example with PyTorch of what I thought about doing with neural-fortran:
import numpy as np
import torch
import torch.nn as nn
############################################################################
## Define a model to represent the target solution ( y(x) )
class Model(nn.Module):
    def __init__(self, D_in, H, D_out):
        super(Model, self).__init__()
        self.lin1 = nn.Linear(D_in, H)
        self.lin2 = nn.Linear(H, D_out)
    def forward(self, x):
        x = torch.sigmoid(self.lin1(x)) # sigmoid only for the first activation
        x = self.lin2(x)
        return x
############################################################################
## Define loss_function from the Ordinary differential equation to solve
def ODE(x, fun):
    y = fun(x)
    dydx = torch.autograd.grad(y, x,
                               grad_outputs=y.data.new(y.shape).fill_(1),
                               create_graph=True, retain_graph=True)[0]
    d2ydx2 = torch.autograd.grad(dydx, x,
                                 grad_outputs=dydx.data.new(dydx.shape).fill_(1),
                                 create_graph=True, retain_graph=True)[0]
    ## Evaluate the ODE residual
    eq = d2ydx2 + torch.tensor([ 2.])                      # y'' = -2
    ## Impose the known boundary conditions
    bc1 = fun(torch.tensor([-2.])) - torch.tensor([-1.]))  if False else fun(torch.tensor([-2.])) - torch.tensor([-1.])  # y(x=-2) = -1
    bc2 = fun(torch.tensor([ 2.])) - torch.tensor([ 1.]))  if False else fun(torch.tensor([ 2.])) - torch.tensor([ 1.])  # y(x= 2) =  1
    return torch.mean(eq**2) + bc1**2 + bc2**2
############################################################################
## Define the iterative regression fitting function
def fit(model, loss_func, opt, train_x, MaxEpochs, Tolerance):
    RelError = 1.0
    epoch = 0
    loss_0 = 1.0
    while RelError > Tolerance and epoch < MaxEpochs:
        opt.zero_grad()
        loss = loss_func(train_x, model)
        loss.backward()
        opt.step()
        SqError = loss.item()
        RelError = loss.item()/loss_0
        if epoch == 0:
            loss_0 = loss.item()
        if epoch % 100 == 0:
            print('epoch {}, loss {}, Rel. loss {}'.format(epoch, SqError, RelError))
        epoch += 1
############################################################################
## Define reference grid (we can do it directly as a Tensor object)
x_data = torch.linspace(-2.0,2.0,401,requires_grad=True)
x_data = x_data.view(401,1) # reshaping the tensor
############################################################################
## Instantiate the model
model = Model(1,10,1) # 1 input dimension, 10 hidden nodes , 1 output dimension
# Instantiate the minimizable or loss function
loss_func = ODE
# Define the optimization algorithm
from torch import optim
opt = optim.Adam(model.parameters(),lr=0.1,amsgrad=True)
# Run regression
fit(model,loss_func,opt,x_data,MaxEpochs=1000,Tolerance=1e-4)
y_data = model(x_data)
############################################################################
## plot the results
import matplotlib.pyplot as plt
plt.plot(x_data.data.numpy(), -x_data.data.numpy()**2+0.5*x_data.data.numpy()+4., label='exact')
plt.plot(x_data.data.numpy(), y_data.data.numpy(), label='approx',linestyle='dashed')
plt.legend()
plt.show()
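As a side note on the online-evaluation use case mentioned above, the same autograd machinery works after training as well. A small sketch (reusing the trained model from above, at an arbitrary evaluation point of my choosing) of computing the first and second derivatives of the prediction:

x0 = torch.tensor([[0.5]], requires_grad=True)
y0 = model(x0)
dydx0 = torch.autograd.grad(y0, x0, grad_outputs=torch.ones_like(y0),
                            create_graph=True)[0]
d2ydx20 = torch.autograd.grad(dydx0, x0, grad_outputs=torch.ones_like(dydx0))[0]
print('y =', y0.item(), ' dy/dx =', dydx0.item(), ' d2y/dx2 =', d2ydx20.item())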
I saw that the layers' backward implementations take a gradient input; maybe this could be recycled with an intent(inout) argument? Then an implementation of a gradient function, in the same scope as the network's backward implementation but looping incrementally over the layers, could help build up this "autograd"-like operator (a sketch of the idea follows).
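To illustrate the idea, here is a hedged NumPy sketch (not neural-fortran code; the layer layout and names are hypothetical) that builds dz/dx by looping over the layers and chaining each layer's local Jacobian, i.e. the same quantities the backward pass already touches:

import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def grad_wrt_input(weights, biases, x):
    # Sketch of dz/dx for a dense network with sigmoid hidden layers and a
    # linear output layer (hypothetical layout, not the neural-fortran API).
    n_layers = len(weights)
    activations = [x]                       # forward pass, storing layer outputs
    a = x
    for i, (W, b) in enumerate(zip(weights, biases)):
        z = W @ a + b
        a = z if i == n_layers - 1 else sigmoid(z)
        activations.append(a)
    jac = np.eye(len(a))                    # start from dz/dz = I
    for i in reversed(range(n_layers)):     # chain the layer Jacobians backward
        W = weights[i]
        if i == n_layers - 1:
            layer_jac = W                   # linear output layer: Jacobian is W
        else:
            s = activations[i + 1]          # sigmoid output of this hidden layer
            layer_jac = (s * (1.0 - s))[:, None] * W   # diag(s') @ W
        jac = jac @ layer_jac
    return jac                              # shape (n_outputs, n_inputs)

# hypothetical usage with random 1-10-1 weights, mirroring the PyTorch model above
weights = [np.random.randn(10, 1), np.random.randn(1, 10)]
biases  = [np.random.randn(10), np.random.randn(1)]
print(grad_wrt_input(weights, biases, np.array([0.5])))   # dz/dx, shape (1, 1)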