How to interpret the trained model? #1382
Replies: 1 comment
-
Hi @landkwon94, what exactly do you mean by 'feature importance' or 'critical time period'?
-
Hello Sir!
First of all, thank you so much for sharing your amazing code!
I am using the NeuralProphet model to train, validate, and test a time-series forecasting setup.
After training the model, I would like to visualize the 'feature importance' or identify the 'critical time period'.
Since the original paper is titled 'NeuralProphet: Explainable Forecasting at Scale', I hope there are modules for explaining the model's forecasts.
I would like to know whether this is possible, and how it can be done!
I will wait for your reply :)
Many thanks for your contributions :)