Enhancement: better mesh support with weighted averaging of random field values #249
Hi Michael-P-Crisp, depending on the application you have in mind, you could have a look at the already implemented coarse graining procedure. For the application it is intended for, it is the mathematically "correct" way of doing upscaling. Have you checked, with a high-resolution reference field and a low-resolution field, whether your solution gives accurate estimates of the reference when applied to the low-resolution field?

I think my approach would be to subdivide the given mesh and then calculate the SRF on that. We have a good binding to PyVista, which has an example of how to apply such a subdivision. Of course this approach would be much more computationally demanding.

Maybe it would help me understand your problem better if you could share a bit of background on your problem/application?
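A minimal sketch of that subdivision idea, assuming the FEM mesh is available as an all-triangle `pyvista.PolyData` surface; the file name, model parameters and refinement level below are illustrative only, not taken from the discussion:

```python
import pyvista as pv
import gstools as gs

# illustrative covariance model and field generator
model = gs.Exponential(dim=2, var=1.0, len_scale=3.0)
srf = gs.SRF(model, seed=20170519)

mesh = pv.read("mesh.vtk")   # placeholder path to the FEM mesh (triangles)
fine = mesh.subdivide(2)     # split each triangle into 4**2 = 16 sub-triangles

# evaluate the field at the centroids of the refined cells
pts = fine.cell_centers().points
fine.cell_data["srf"] = srf((pts[:, 0], pts[:, 1]))
```

The refined values could then be aggregated per parent element (arithmetically or geometrically, depending on the distribution), at the cost of evaluating the field at many more points.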
Hi LSchueler, thank you for the quick reply. I'm working on plastic deformation of solids under loading in a finite element analysis program, for civil engineering work. I'm hesitant to increase the mesh resolution, since this can impact the computational solver time.

I've done a comparison between values generated at the centroids (with coarse graining and appropriate areas of the mesh elements) and an equivalent mesh field derived from averaging the point statistics within each element (from a 0.1 m grid, much smaller than the element sizes). The correlation length is 3 m in both directions using an exponential model (no nugget), with a lognormal distribution (I took the log of the values here for better plotting). A comparison of the two cases is given in the image below. The mean and standard deviation of the two are very similar, but you can see from the plot that the mesh with the grid averaging has an overall smoother appearance, with more continuous values across elements, compared to the centroid values. One discrepancy, of course, is that the mesh uses irregular triangles while the coarse graining assumes regular squares. However, I'm wondering if the exponential model could be the reason for the slight difference? The page you linked to indicates that it is for a Gaussian covariance function.

I've also noticed that the field's variance when using point_volumes is higher than expected compared to the plot below (variance reduction as a function of element length / correlation length), where the dashed line is a Gaussian covariance and the solid line is an exponential covariance. Using a correlation length of 0.4 m, I'm getting a reduction of 80% and 50% when using a point_volume of 0.4^2 and 0.8^2 respectively. That's much closer to, but still higher than, the Gaussian covariance curve (70% and 45%). I've taken the image from the book "Risk Assessment in Geotechnical Engineering", full reference:
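For what it's worth, one rough way to check this numerically is to block-average a fine structured field and compare its variance against a field generated with `point_volumes`. A minimal sketch, with illustrative grid, block size and model parameters rather than the exact setup described above:

```python
import numpy as np
import gstools as gs

# illustrative model: exponential covariance, correlation length 0.4 m
model = gs.Exponential(dim=2, var=1.0, len_scale=0.4)

block = 0.8               # hypothetical element size D
nsub = 8                  # fine cells per block edge
dx = block / nsub         # fine grid spacing
x = y = np.arange(0.0, 40.0, dx)

# fine-resolution reference field on a structured grid
srf = gs.SRF(model, seed=20230101)
field = srf((x, y), mesh_type="structured")

# arithmetic block averages over nsub x nsub patches
nx, ny = field.shape[0] // nsub, field.shape[1] // nsub
blocks = (
    field[: nx * nsub, : ny * nsub]
    .reshape(nx, nsub, ny, nsub)
    .mean(axis=(1, 3))
)
print("variance of block averages:", blocks.var())

# same model, letting GSTools apply the coarse graining via point_volumes
coarse = gs.SRF(model, seed=20230101)((x, y), mesh_type="structured", point_volumes=block**2)
print("variance with point_volumes:", coarse.var())
```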
Hi, I'm encountering an issue where random field values for mesh elements are evaluated exclusively at the centroid, and this single value isn't representative for larger elements.
I've implemented a solution in my script like the one below: srf calculates the values at the nodes, and srf_centroid calculates them at the centroids. The result is then a weighted average (50% centroid, 50% element nodes). The problem is that it's not very generalised for different distributions; for example, a lognormal random field should use the geometric average rather than the arithmetic average.
```python
import numpy as np

X1 = srf(seed=seed, store=False)[self.connectivity]  # field values at the element nodes, shape (n_elements, n_nodes)
X2 = np.atleast_2d(srf_centroid(seed=seed, store=False)).T  # field values at the element centroids, shape (n_elements, 1)
X = np.hstack((X1, X2 * X1.shape[1])).sum(1) / (X1.shape[1] * 2)  # weighted average: 50% centroid, 50% mean of the element nodes
```
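For the lognormal case mentioned above, one possibility (purely a sketch, with the hypothetical names `X1_ln` and `X2_ln` standing in for the lognormal node and centroid values) would be to apply the same weights as a geometric average, i.e. average the logs and exponentiate:

```python
import numpy as np

# X1_ln: lognormal values at the element nodes, shape (n_elements, n_nodes)
# X2_ln: lognormal values at the element centroids, shape (n_elements, 1)
n = X1_ln.shape[1]
log_avg = np.hstack((np.log(X1_ln), np.log(X2_ln) * n)).sum(1) / (2 * n)
X_geo = np.exp(log_avg)  # weighted geometric mean: 50% centroid, 50% nodes
```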
I was wondering if something like this could be implemented in a more robust and generalised way in the package itself? It's fairly efficient, in that many elements share nodes. I imagine there wouldn't be much need for a point_volume input, since some variance reduction already occurs through the local averaging?