Right now, including the forest cover weighting, each iteration takes about 9 seconds. I'm sure this can go faster. For starters, the way the script stores output in .dbf format seems clunky: I end up with tens of thousands of .dbf files that then have to be processed again in an R script.
Is there a way to build a single output table and append to it in each iteration, instead of writing a separate .dbf file in every loop?
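One option, assuming the loop is in Python, is to accumulate results into a single CSV instead of writing a .dbf per iteration. The names here (`zonal_results.csv`, `append_rows`, the `zone_id`/`mean` columns) are hypothetical placeholders for whatever the script actually computes; this is just a sketch of the append pattern, not the original workflow.

```python
import csv
import os

OUT_PATH = "zonal_results.csv"  # hypothetical single output table

def append_rows(rows, header, path=OUT_PATH):
    """Append per-iteration result rows to one CSV, writing the header only once."""
    write_header = not os.path.exists(path)
    with open(path, "a", newline="") as f:
        writer = csv.writer(f)
        if write_header:
            writer.writerow(header)
        writer.writerows(rows)

# Example: pretend each loop iteration yields (zone_id, mean_value) pairs.
for i in range(3):
    rows = [(i, i * 0.5)]
    append_rows(rows, header=["zone_id", "mean"])
```

The R post-processing step then reads one file instead of globbing tens of thousands of .dbf files, which also avoids the per-file open/close overhead inside the loop.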
To get a handle on the output issue, you can simplify the script to omit the forest cover weighting. That input file was too big to share here anyway. The rest of the process doesn't change at all; only the distribution of values changes, which is fine for now. Let me know if the script comments are not clear.
@Patdi