
2D Burgers Equation #31
Open · engsbk opened this issue Mar 5, 2022 · 3 comments

engsbk commented Mar 5, 2022

Hello @levimcclenny and thanks for recommending this library!

I have modified the 1D Burgers example to work in 2D, but the comparison against the reference solution is not good. Any suggestions?

```python
import math
import scipy.io
import numpy as np          # used for the IC and plotting below
import tensorflow as tf     # used in f_model
import tensordiffeq as tdq
from tensordiffeq.boundaries import *
from tensordiffeq.models import CollocationSolverND

# Space-time domain: (x, y) in [-1, 1]^2, t in [0, 1]
Domain = DomainND(["x", "y", "t"], time_var='t')

Domain.add("x", [-1.0, 1.0], 256)
Domain.add("y", [-1.0, 1.0], 256)
Domain.add("t", [0.0, 1.0], 100)

N_f = 10000
Domain.generate_collocation_points(N_f)


def func_ic(x, y):
    # Initial condition u(x, y, 0) = sin(2*pi*x) * sin(pi*y)
    p = 2
    q = 1
    return np.sin(p * math.pi * x) * np.sin(q * math.pi * y)


init = IC(Domain, [func_ic], var=[['x', 'y']])
upper_x = dirichletBC(Domain, val=0.0, var='x', target="upper")
lower_x = dirichletBC(Domain, val=0.0, var='x', target="lower")
upper_y = dirichletBC(Domain, val=0.0, var='y', target="upper")
lower_y = dirichletBC(Domain, val=0.0, var='y', target="lower")

BCs = [init, upper_x, lower_x, upper_y, lower_y]


def f_model(u_model, x, y, t):
    # Residual of the 2D Burgers equation:
    # u_t + u * (u_x + u_y) - (0.01/pi) * (u_xx + u_yy) = 0
    u = u_model(tf.concat([x, y, t], 1))
    u_x = tf.gradients(u, x)
    u_xx = tf.gradients(u_x, x)
    u_y = tf.gradients(u, y)
    u_yy = tf.gradients(u_y, y)
    u_t = tf.gradients(u, t)
    f_u = u_t + u * (u_x + u_y) - (0.01 / tf.constant(math.pi)) * (u_xx + u_yy)
    return f_u


layer_sizes = [3, 20, 20, 20, 20, 20, 20, 20, 20, 1]

model = CollocationSolverND()
model.compile(layer_sizes, f_model, Domain, BCs)

# to reproduce results from Raissi and the SA-PINNs paper, train for 10k Adam and 10k Newton (L-BFGS)
model.fit(tf_iter=10000, newton_iter=10000)

model.save("burger2D_Training_Model")
# model.load("burger2D_Training_Model")

#######################################################
#################### PLOTTING #########################
#######################################################

# Reference solution generated separately with py-pde
data = np.load('py-pde_2D_burger_data.npz')

Exact = data['u_output']
Exact_u = np.real(Exact)

x = Domain.domaindict[0]['xlinspace']
y = Domain.domaindict[1]['ylinspace']
t = Domain.domaindict[2]["tlinspace"]

X, Y, T = np.meshgrid(x, y, t)

X_star = np.hstack((X.flatten()[:, None], Y.flatten()[:, None], T.flatten()[:, None]))
u_star = Exact_u.T.flatten()[:, None]

u_pred, f_u_pred = model.predict(X_star)

error_u = tdq.helpers.find_L2_error(u_pred, u_star)
print('Error u: %e' % (error_u))

lb = np.array([-1.0, -1.0, 0.0])
ub = np.array([1.0, 1.0, 1.0])

tdq.plotting.plot_solution_domain2D(model, [x, y, t], ub=ub, lb=lb, Exact_u=Exact_u.T)
```
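(The reference file py-pde_2D_burger_data.npz was generated separately with py-pde. A script along the following lines could produce comparable data; this is only a sketch of a plausible setup, and the grid, time sampling, and solver settings of the original script may differ.)

```python
# Sketch (not the exact script used): reference data for 2D Burgers via py-pde
import numpy as np
import pde  # py-pde package

grid = pde.CartesianGrid([[-1, 1], [-1, 1]], [256, 256])
u0 = pde.ScalarField.from_expression(grid, "sin(2 * pi * x) * sin(pi * y)")

# u_t = -u * (u_x + u_y) + (0.01 / pi) * laplace(u)
eq = pde.PDE({"u": "-u * (d_dx(u) + d_dy(u)) + 0.01 / pi * laplace(u)"})

storage = pde.MemoryStorage()
eq.solve(u0, t_range=1.0, dt=1e-4, tracker=storage.tracker(1.0 / 99))  # ~100 snapshots

np.savez("py-pde_2D_burger_data.npz", u_output=np.array(storage.data))
```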

[Three screenshots: plots comparing the predicted solution against the reference solution]

levimcclenny (Member) commented
@engsbk I'd try training for a little longer and adding the SA weighting scheme. 10k isn't a very long training horizon, especially for a 2D problem. I would bump up the Adam iterations and see where that takes you.
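For instance, something along these lines (the 40k/10k split is illustrative, not a verified setting):

```python
# Illustrative only: more Adam steps (tf_iter) before the L-BFGS phase (newton_iter)
model.fit(tf_iter=40000, newton_iter=10000)
```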

levimcclenny (Member) commented
@engsbk Did you train this one for longer? I really think that with more training iterations you'll get pretty good results here.

engsbk (Author) commented Mar 8, 2022

Yes, I increased the Adam iterations to 40,000. The results improved only on the first time slice.
[Two screenshots: comparison plots after training with 40,000 Adam iterations]

I was also trying to apply SA here, but when I set isAdaptive to True and used

```python
dict_adaptive = {"residual": [True],
                 "BCs": [True, False, False, False, False]}
init_weights = {"residual": [tf.random.uniform([N_f, 1])],
                "BCs": [100 * tf.random.uniform([1, 1, 1, 1, 1]), None]}
```

the loss kept increasing during Adam training until it eventually became loss = nan, and I could not plot. Any ideas?
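(A shape-consistent setup might look like the sketch below. This is an untested guess patterned on the SA-PINNs Allen-Cahn example in the tensordiffeq README: init_weights["BCs"] should have one entry per condition in BCs, in the same order, with None for each non-adaptive condition, and each adaptive weight tensor should have one row per training point. The 256*256 IC-point count is an assumption about how the initial condition is sampled here.)

```python
# Untested sketch: entries match BCs = [init, upper_x, lower_x, upper_y, lower_y].
# Only the IC is adaptive; each None marks a non-adaptive BC.
dict_adaptive = {"residual": [True],
                 "BCs": [True, False, False, False, False]}

n_ic = 256 * 256  # assumption: one weight per IC point on the 256x256 (x, y) grid
init_weights = {"residual": [tf.random.uniform([N_f, 1])],
                "BCs": [100 * tf.random.uniform([n_ic, 1]), None, None, None, None]}

model.compile(layer_sizes, f_model, Domain, BCs, isAdaptive=True,
              dict_adaptive=dict_adaptive, init_weights=init_weights)
```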
