Replies: 4 comments
-
Also, do you have any reference that can help me understand the structure of the object?
-
Hi @cmottac
So if you are new to the Bayesian approach, then I think the best resources would be books (there's a list on the pymc website) or possibly the examples page, to get used to checking that the MCMC sampling process is healthy. In short, if you don't get any convergence or divergence warnings then you are most likely fine. More samples generally means a more accurate estimate of the true posterior distribution of the parameters given the data, but in most cases the default (1000) is acceptable; ramp that up a bit if accuracy is really crucial. The priors are perhaps a different story. The synthetic control uses the […] As of now, there's no easy way for users to provide custom priors. Adding this functionality is on our roadmap: #387
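To build intuition for the "more samples means a more accurate estimate" point, here's a toy NumPy sketch (not CausalPy or PyMC itself): the Monte Carlo error of a posterior-mean estimate shrinks roughly like 1/sqrt(number of draws), which is why 1000 draws is often enough and ramping up only buys you incremental accuracy.

```python
import numpy as np

rng = np.random.default_rng(0)

def mc_error(n_draws, n_repeats=200):
    # Pretend each row is a set of MCMC draws from a posterior centred on 0,
    # and measure the typical absolute error of the estimated posterior mean.
    draws = rng.normal(loc=0.0, scale=1.0, size=(n_repeats, n_draws))
    return np.abs(draws.mean(axis=1)).mean()

for n in (100, 1_000, 10_000):
    print(f"{n:>6} draws -> typical error of the posterior mean: {mc_error(n):.4f}")
```

Each 10x increase in draws cuts the typical error by roughly sqrt(10), so accuracy improves quickly at first and then plateaus.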
-
I think it makes sense to calculate the causal impact as a percent change, for example. You could plot the causal impact curve, but rather than absolute units on the y-axis you could plot % change. Whether it makes sense to average over time, I'm less clear on. If there is an upward trend (for example) then it maybe becomes less clear how to interpret that. But you could also do the % cumulative causal impact, for example, and look at the final value in the time series. Just some thoughts.
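A minimal sketch of both ideas with made-up numbers (these arrays are stand-ins for posterior-mean curves, not real CausalPy output): per-timestep percent impact relative to the counterfactual, and the cumulative percent impact read off at the final time point.

```python
import numpy as np

# Hypothetical counterfactual (model prediction with no treatment) and the
# observed outcome over four post-treatment time steps.
counterfactual = np.array([100.0, 110.0, 120.0, 130.0])
observed       = np.array([108.0, 121.0, 135.0, 149.5])

impact = observed - counterfactual            # absolute causal impact per step

# Percent change per time step, relative to the counterfactual.
pct_impact = 100 * impact / counterfactual

# Cumulative % impact: final cumulative impact over final cumulative counterfactual.
cum_pct = 100 * impact.cumsum()[-1] / counterfactual.cumsum()[-1]

print("per-step % impact:", np.round(pct_impact, 1))
print("cumulative % impact at the end:", round(cum_pct, 1))
```

Note how the per-step percentages drift upward over time here, which is exactly the trend-interpretation issue mentioned above; the single cumulative figure sidesteps that by summarising the whole post-treatment window.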
-
So these are […]
-
Hi, I’m currently using the `SyntheticControl` wrapper as outlined in the documentation. However, I’m relatively new to `pymc`, and while I can replicate the examples successfully, I’m unsure about which parameters are most critical to adjust in order to obtain robust and meaningful results. Are there any specific aspects or best practices I should be particularly mindful of when running these analyses? For instance, are priors important to set and fine-tune? Or do I need to play around with the number of iterations?

In addition, I have a more specific question. I understand that to get the average causal impact I can use `az.summary(result.post_impact.mean("obs_ind"))`. In case I want the average value of the counterfactual, is it correct to use `az.summary(result.post_pred.mean("obs_ind"))`? My rationale: if I want an estimate of the relative causal effect that the treatment has on some metric, I think the right way would be to consider the ratio of the causal impact to the counterfactual for that metric.

Thank you in advance,
Carlo
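The ratio rationale can be sketched with plain NumPy arrays shaped like `(draws, obs_ind)` posteriors. Everything here is a stand-in (the array contents are fabricated, and these are not real `result.post_impact` / `result.post_pred` objects); the point is that forming the ratio per draw, rather than taking the ratio of two `az.summary` point estimates, keeps the posterior uncertainty of the relative effect.

```python
import numpy as np

rng = np.random.default_rng(1)
n_draws, n_obs = 4000, 50

# Stand-ins for posterior draws: counterfactual around 100, impact around 10.
post_pred   = 100.0 + rng.normal(0.0, 2.0, size=(n_draws, n_obs))
post_impact =  10.0 + rng.normal(0.0, 1.0, size=(n_draws, n_obs))

# Average over time (the "obs_ind" axis) within each draw, then take the
# ratio per draw, so the relative effect keeps its posterior uncertainty.
rel_effect = post_impact.mean(axis=1) / post_pred.mean(axis=1)

print(f"relative effect ~ {rel_effect.mean():.3f} "
      f"(94% interval {np.quantile(rel_effect, 0.03):.3f} "
      f"to {np.quantile(rel_effect, 0.97):.3f})")
```

With these fabricated numbers the relative effect comes out near 0.10 (a ~10% lift over the counterfactual), and because the ratio is computed per draw you can summarise it with any posterior statistic you like.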