Question about the ABL precursor sampling #73

Open
Kumoi-S opened this issue Mar 23, 2024 · 1 comment

Kumoi-S commented Mar 23, 2024

Hi,
I've been working with the ABL flat-terrain precursor using this example case, and almost everything went well. I got the results for the final timestep and ran reconstructPar correctly, but I'm confused by some error messages that show up every few timesteps in the solver's log:

functionObjects::Q Q writing field: Q
functionObjects::vorticity vorticity writing field: vorticity
--> FOAM Warning :
From function Foam::label Foam::sampledSurfaces::classifyFields()
in file sampledSurface/sampledSurfaces/sampledSurfacesGrouping.C at line 75
Cannot find registered field matching UMean
--> FOAM Warning :
From function Foam::label Foam::sampledSurfaces::classifyFields()
in file sampledSurface/sampledSurfaces/sampledSurfacesGrouping.C at line 75
Cannot find registered field matching UPrime2Mean
--> FOAM Warning :
From function Foam::label Foam::sampledSurfaces::classifyFields()
in file sampledSurface/sampledSurfaces/sampledSurfacesGrouping.C at line 75
Cannot find registered field matching TPrimeUPrimeMean

I suspect the issue is caused by system/sampling/slicesPrecursor, which requests sampling of UMean, UPrime2Mean, and TPrimeUPrimeMean; it seems, however, that the solver wasn't generating these fields.
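For reference, the request in that dictionary looks roughly like the sketch below. This is a generic OpenFOAM surfaces function object, not the exact file from the case; the write settings are placeholders and the slice definitions are omitted.

slicesPrecursor
{
    type                surfaces;
    libs                ("libsampling.so");
    enabled             true;
    interpolationScheme cellPoint;
    surfaceFormat       vtk;
    writeControl        adjustableRunTime;
    writeInterval       100.0;

    // The averaged fields that the warnings above complain about:
    fields              ( UMean UPrime2Mean TPrimeUPrimeMean );

    surfaces
    (
        // slice definitions omitted
    );
}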

My first question is: is this message caused by my compilation or case setup, or is it a known issue that I can safely ignore?

Also, at the end of the solver's log there are MPI errors. I ran the precursor simulation twice (3×3 km, 20 m grid, neutral | 4×3 km, 10 m grid, neutral) and got different errors:

=============the 3×3km case ===============
[32167d1f2726:1847] *** An error occurred in MPI_Group_free
[32167d1f2726:1847] *** reported by process [2330132481,30]
[32167d1f2726:1847] *** on communicator MPI_COMM_WORLD
[32167d1f2726:1847] *** MPI_ERR_GROUP: invalid group
[32167d1f2726:1847] *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort,
[32167d1f2726:1847] *** and potentially your MPI job)
[32167d1f2726:01812] 30 more processes have sent help message help-mpi-errors.txt / mpi_errors_are_fatal
[32167d1f2726:01812] Set MCA parameter "orte_base_help_aggregate" to 0 to see all help / error messages

==============the 4×3km case===============
[32167d1f2726:04387] *** Process received signal ***
[32167d1f2726:04387] Signal: Bus error (7)
[32167d1f2726:04387] Signal code: (128)
[32167d1f2726:04387] Failing at address: (nil)
[32167d1f2726:04387] [ 0] /lib/x86_64-linux-gnu/libpthread.so.0(+0x12980)[0x7ff8af798980]
[32167d1f2726:04387] [ 1] /usr/lib/x86_64-linux-gnu/libmpi.so.20(ompi_group_free+0x16)[0x7ff8ace3c686]
[32167d1f2726:04387] [ 2] /usr/lib/x86_64-linux-gnu/libmpi.so.20(PMPI_Group_free+0x57)[0x7ff8ace645c7]
[32167d1f2726:04387] [ 3] /foam/openfast/install/lib/libopenfastcpplib.so(_ZN4fast8OpenFAST3endEv+0x9a)[0x7ff8b36ef58a]
[32167d1f2726:04387] [ 4] superDeliciousVanilla(+0x49d33)[0x5624a0249d33]
[32167d1f2726:04387] [ 5] /lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xe7)[0x7ff8af3b6c87]
[32167d1f2726:04387] [ 6] superDeliciousVanilla(+0x4d11a)[0x5624a024d11a]
[32167d1f2726:04387] *** End of error message ***
[32167d1f2726:4374] *** An error occurred in MPI_Group_free
[32167d1f2726:4374] *** reported by process [2633498625,2]
[32167d1f2726:4374] *** on communicator MPI_COMM_WORLD
[32167d1f2726:4374] *** MPI_ERR_GROUP: invalid group
[32167d1f2726:4374] *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort,
[32167d1f2726:4374] *** and potentially your MPI job)
[32167d1f2726:04388] *** Process received signal ***
[32167d1f2726:04388] Signal: Bus error (7)
[32167d1f2726:04388] Signal code: (128)
[32167d1f2726:04388] Failing at address: (nil)
[32167d1f2726:04388] [ 0] /lib/x86_64-linux-gnu/libpthread.so.0(+0x12980)[0x7fe968ee7980]
[32167d1f2726:04388] [ 1] /usr/lib/x86_64-linux-gnu/libmpi.so.20(ompi_group_free+0x16)[0x7fe96658b686]
[32167d1f2726:04388] [ 2] /usr/lib/x86_64-linux-gnu/libmpi.so.20(PMPI_Group_free+0x57)[0x7fe9665b35c7]
[32167d1f2726:04388] [ 3] /foam/openfast/install/lib/libopenfastcpplib.so(_ZN4fast8OpenFAST3endEv+0x9a)[0x7fe96ce3e58a]
[32167d1f2726:04388] [ 4] superDeliciousVanilla(+0x49d33)[0x55ac92449d33]
[32167d1f2726:04388] [ 5] /lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xe7)[0x7fe968b05c87]
[32167d1f2726:04388] [ 6] superDeliciousVanilla(+0x4d11a)[0x55ac9244d11a]
[32167d1f2726:04388] *** End of error message ***
[32167d1f2726:04367] 25 more processes have sent help message help-mpi-errors.txt / mpi_errors_are_fatal
[32167d1f2726:04367] Set MCA parameter "orte_base_help_aggregate" to 0 to see all help / error messages

I notice that the backtrace mentions "libopenfastcpplib.so". I compiled openfast at the specific version mentioned in #51, which is @a0d4f7e. The function reporting the error seems to be void fast::OpenFAST::end(), and I have no idea why this happens, nor whether it will affect the final simulation result and subsequent simulations.

The log file for the 3×3 km simulation is log.0.superDeliciousVanilla.startAt0.ABL_1.txt. I deleted part of the log in the middle because the whole file was too large and exceeded the attachment size limit.

Appreciate any insights you can provide.
Thanks a bunch!

rthedin (Collaborator) commented Apr 4, 2024

Just FYI, the example link you gave is not part of the SOWFA-6 repository.

The first message is a warning (not an error) saying that those fields do not exist, so sampling cannot be done. You have to enable field averaging so that the averaged fields exist and can be sampled. Here is an example.
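A minimal sketch of what enabling the averaging can look like is below, using the standard OpenFOAM fieldAverage function object in the functions block of system/controlDict (or an included averaging file). This registers UMean and UPrime2Mean; it is only an illustration rather than the exact SOWFA-6 setup, and TPrimeUPrimeMean is a SOWFA-specific temperature-velocity correlation that has to come from SOWFA's own averaging rather than from fieldAverage.

fieldAverage1
{
    type            fieldAverage;
    libs            ("libfieldFunctionObjects.so");
    enabled         true;
    timeStart       0;          // start averaging at t = 0 (adjust to skip spin-up)
    writeControl    writeTime;  // write the averaged fields at every write time

    fields
    (
        U
        {
            mean        on;     // produces UMean
            prime2Mean  on;     // produces UPrime2Mean
            base        time;
        }
        T
        {
            mean        on;     // produces TMean
            prime2Mean  on;     // produces TPrime2Mean
            base        time;
        }
    );
}

Once these fields are registered, the sampledSurfaces warnings for UMean and UPrime2Mean should stop, provided the averaging is active before the sampling starts.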

Regarding the second issue, it might be a compilation or linking problem with the openfast library. I don't have time to debug it right now, but if it is happening at the very end of the simulation, you should be okay. Check the openfast output files to confirm that you are getting results for the whole simulation time and that it is only crashing when the coupling is closed.
