First of all, thank you for sharing this great project ❤️ In my opinion it is a lifesaver for anyone getting started with audio plugins.
I have already tested it on Windows 10 (MinGW) and Ubuntu 22.04, and it works on both.
I am currently trying to develop an AI-based audio effect plugin.
My plugin uses OpenBLAS, and although the resulting VST3 works on my machine, when I try it on other machines it is not scanned/recognised by the DAW.
I suspect this is related to multi-threading issues when using OpenBLAS, or to my architecture. I also noticed that my inference/run is invoked even though my plugin is paused/stopped. Hence I would like to ask if there is a way to handle this, something like void deactivate() or some way to bypass the run logic in this case. Does this make any sense?
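To make that question concrete, here is a rough, untested sketch of what I have in mind, assuming DPF's activate()/deactivate() callbacks are the right hooks for this (the fActive flag is just a placeholder name; model and model_process are from my code below):

```cpp
// Sketch only: gate the inference on activate()/deactivate() so the model
// is not invoked while the host has the plugin deactivated.
#include <atomic>
#include <cstring>

// inside the ImGuiPluginDSP class shown below:
std::atomic<bool> fActive { false };

void activate() override
{
    fActive.store(true, std::memory_order_release);
}

void deactivate() override
{
    fActive.store(false, std::memory_order_release);
}

void run(const float** inputs, float** outputs, uint32_t frames) override
{
    if (! fActive.load(std::memory_order_acquire))
    {
        // not active: pass the input through (per channel) instead of running the model
        std::memcpy(outputs[0], inputs[0], sizeof(float) * frames);
        return;
    }
    model_process(model, inputs[0], inputs[1], frames, outputs);
}
```

I am not sure whether the host is even supposed to call run() in the deactivated state, so maybe this guard is redundant.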
My second question: what would be a suitable architecture for loading an AI model, invoking it, and then cleaning it up in the audio plugin? At the moment I use this approach:
#include"DistrhoPlugin.hpp"
START_NAMESPACE_DISTRHO
// --------------------------------------------------------------------------------------------------------------------classImGuiPluginDSP : publicPlugin
{
// inits ...
Model* model;
public:/** Plugin class constructor.@n You must set all parameter values to their defaults, matching ParameterRanges::def.*/ImGuiPluginDSP()
: Plugin(kParamCount, 0, 0), // parameters, programs, statesmodel(nullptr)
{
// load model weights ...load_weights();
}
~ImGuiPluginDSP() override
{
// free modelfree_model();
// free weightsfree_weights();
}
protected:// ----------------------------------------------------------------------------------------------------------------// Information
...
// ----------------------------------------------------------------------------------------------------------------// Internal data// ----------------------------------------------------------------------------------------------------------------// Audio/MIDI Processing/** Activate this plugin.*/voidactivate() override
{
// allocate and init modelmodel_init();
}
/** Run/process function for plugins without MIDI input. @note Some parameters might be null if there are no audio inputs or outputs.*/voidrun(constfloat** inputs, float** outputs, uint32_t frames) override
{
// run model inference/ routinemodel_process(model, inputs[0], inputs[1], frames, outputs);
}
// ----------------------------------------------------------------------------------------------------------------// Callbacks (optional)// ----------------------------------------------------------------------------------------------------------------DISTRHO_DECLARE_NON_COPYABLE_WITH_LEAK_DETECTOR(ImGuiPluginDSP)
};
// --------------------------------------------------------------------------------------------------------------------
Plugin* createPlugin()
{
returnnewImGuiPluginDSP();
}
// --------------------------------------------------------------------------------------------------------------------
END_NAMESPACE_DISTRHO
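For reference, the alternative I am considering for this second question (again untested, just a sketch) is to keep the weights for the plugin's whole lifetime but tie the model instance to activate()/deactivate(), holding it in a std::unique_ptr with a custom deleter so cleanup cannot be forgotten. Note that I assume here that model_init() returns the handle and free_model() takes it, which differs slightly from my current signatures. I also wonder whether pinning OpenBLAS to a single thread there would rule out the threading suspicion; openblas_set_num_threads() is the call I found in the OpenBLAS headers, though I may be misjudging how it interacts with the audio thread:

```cpp
// Lifecycle sketch: weights live as long as the plugin instance,
// the model only while the host keeps the plugin activated.
#include <memory>
#include <cblas.h> // OpenBLAS header, for openblas_set_num_threads()

struct ModelDeleter
{
    void operator()(Model* m) const noexcept { free_model(m); }
};

// inside the ImGuiPluginDSP class above, replacing the raw Model* member:
std::unique_ptr<Model, ModelDeleter> fModel;

void activate() override
{
    openblas_set_num_threads(1); // keep BLAS single-threaded on the audio path
    fModel.reset(model_init());  // allocate and initialise the model
}

void deactivate() override
{
    fModel.reset(); // frees the model via free_model()
}

void run(const float** inputs, float** outputs, uint32_t frames) override
{
    if (! fModel)
        return; // nothing allocated, e.g. run() called while deactivated

    model_process(fModel.get(), inputs[0], inputs[1], frames, outputs);
}
```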
I should also mention that my model code is written in C (not sure if this is relevant).
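In case the C/C++ boundary matters, this is roughly how I declare the C model API to the C++ side; the header name and exact signatures are made up for illustration, only the function names match my snippet above:

```cpp
// model.h -- hypothetical header for the C model code.
// The extern "C" guard stops g++/MinGW from name-mangling the C symbols.
#ifndef MODEL_H
#define MODEL_H

#include <stdint.h>

#ifdef __cplusplus
extern "C" {
#endif

typedef struct Model Model; // opaque handle, owned by the C code

void   load_weights(void);  // load weights from disk (done once in the constructor)
Model* model_init(void);    // allocate and initialise the model
void   model_process(Model* model,
                     const float* inL, const float* inR,
                     uint32_t frames, float** outputs);
void   free_model(Model* model);
void   free_weights(void);

#ifdef __cplusplus
}
#endif

#endif // MODEL_H
```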
I am using Windows 10 with MinGW 13.2.0.
I would appreciate any response. Thank you :)