
Reduce computation time massively in large het_map objects #1024

Open · wants to merge 4 commits into develop

Conversation

Bartdoekemeijer (Collaborator)
Reduce computation time for large het_map objects

Currently, a new LinearNDInterpolator is constructed for each findex in a FLORIS timeseries evaluation with a heterogeneous_map. Constructing the LinearNDInterpolator required for the heterogeneous map is computationally expensive (especially when the het_map is defined for many coordinates) due to the underlying Delaunay triangulation. However, this triangulation is identical for every findex, so it makes sense to recycle it rather than recalculate it for each findex, as sketched below.
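As a minimal sketch of the reuse pattern (using scipy directly rather than the FLORIS internals; the point counts and variable names here are invented for illustration), scipy's LinearNDInterpolator accepts a precomputed scipy.spatial.Delaunay triangulation in place of the raw coordinates, so the expensive step runs once and is shared across findices:

import numpy as np
from scipy.interpolate import LinearNDInterpolator
from scipy.spatial import Delaunay

# Coordinates at which the het_map is defined (identical for all findices)
points = np.random.default_rng(0).uniform(-3000.0, 3000.0, (1000, 3))

# Expensive step: the Delaunay triangulation, computed only once
tri = Delaunay(points)

# Cheap per-findex step: only the speed multipliers change between findices
interpolants = []
for findex in range(360):
    speed_multipliers = np.ones(points.shape[0])  # placeholder values for this findex
    interpolants.append(LinearNDInterpolator(tri, speed_multipliers))

This mirrors the idea of the PR: the triangulation lives in one shared object, and each findex only binds its own values to it.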

Related issue

I haven't made a separate issue for this. I figured I'd open a PR directly.

Impacted areas of the software

The flow_field.py module.

Additional supporting information

In my usage, loading the heterogeneous map interpolants took about 45 seconds. That is essentially wasted time, and this PR brings it down to 0.4 seconds by recycling the interpolant object and conserving as much information as possible between findices.

Test results, if applicable

Here's a test script to benchmark this functionality:

import numpy as np
import pandas as pd
from time import perf_counter as timerpc

from floris import (
    FlorisModel,
    TimeSeries,
    HeterogeneousMap
)


if __name__ == "__main__":
    # Create a big grid of wind directions and wind speeds for which we assume to have evaluated the het_map
    wd_grid, ws_grid = np.meshgrid(
        np.arange(0.0, 360.0, 3.0),
        np.arange(0.5, 30.51, 1.0)
    )
    df = pd.DataFrame({"wd": wd_grid.flatten(), "ws": ws_grid.flatten()})
    print(f"We have {df.shape[0]} findices.")

    # Create a grid of sensors throughout the farm in x, y, and z
    xg, yg, zg = np.meshgrid(
        np.linspace(-3000.0, 3000.0, 11),
        np.linspace(-3000.0, 3000.0, 11),
        np.arange(0.0, 350.01, 25.0),
    )
    xg = xg.flatten()
    yg = yg.flatten()
    zg = zg.flatten()
    speedups = np.ones((df.shape[0], len(xg)))
    print(f"We have {len(xg)} number of coordinates with het_map information.")

    # Now create FLORIS and a timeseries object with het_map information
    fmodel = FlorisModel("inputs/gch.yaml")
    fmodel.set(wind_shear=0.0)  # Required when working with 3D het_map objects
    het_map = HeterogeneousMap(
        x=xg,
        y=yg,
        z=zg,
        speed_multipliers=speedups,
        wind_directions=wd_grid.flatten(),
        wind_speeds=ws_grid.flatten(),
    )

    print(f"Preparing a timeseries object for 360 findex conditions.")
    ts = TimeSeries(
        wind_directions=np.arange(0.0, 360.0, 1.0),
        wind_speeds=120.0 * np.ones(360),
        turbulence_intensities=0.06 * np.ones(360),
        heterogeneous_map=het_map,
    )
    t0 = timerpc()
    fmodel.set(wind_data=ts)
    print(f"Time spent in 'fmodel.set': {timerpc() - t0:.2f} s")

With this PR, the script above takes 0.4 seconds on my system; with the old code, it takes 25 seconds. If you increase the number of findices, the old code's computation time scales linearly with them, whereas in the new code there is practically no penalty for additional findices.
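To isolate the scaling argument, here is a small scipy-only benchmark (a sketch, not the FLORIS code path; the point and findex counts are arbitrary) comparing a rebuild-per-findex loop against a triangulate-once loop:

import numpy as np
from time import perf_counter as timerpc
from scipy.interpolate import LinearNDInterpolator
from scipy.spatial import Delaunay

rng = np.random.default_rng(0)
points = rng.uniform(-3000.0, 3000.0, (4000, 3))        # het_map coordinates
values = rng.uniform(0.9, 1.1, (100, points.shape[0]))  # per-findex speed multipliers

# Old approach: a full Delaunay triangulation for every findex (scales linearly)
t0 = timerpc()
old_interps = [LinearNDInterpolator(points, v) for v in values]
t_old = timerpc() - t0

# New approach: triangulate once, then reuse it for every findex (near-constant)
t0 = timerpc()
tri = Delaunay(points)
new_interps = [LinearNDInterpolator(tri, v) for v in values]
t_new = timerpc() - t0

print(f"Rebuild per findex: {t_old:.2f} s | reuse triangulation: {t_new:.2f} s")

The remaining per-findex work is just attaching new values to the existing triangulation, which is why additional findices become almost free.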

@paulf81 (Collaborator)

paulf81 commented Nov 15, 2024

Hi @Bartdoekemeijer, thank you for this! I made some small formatting changes and will take a deeper dive next week.
