PS: Re naming suggestions: As M_.params is updated too in the 50-123 block, the name update_params is also adequate, though something like updateQHMparams() would still be more precise.
----- Original Message ----- From: G. Perendia To: List for Dynare developers Sent: Monday, September 21, 2009 10:27 AM Subject: Re: [DynareDev] Proposed specification for estimation functions
Thanks Stephane
Re: 3&4) Sorry for the confusion: the overlapping variables refer to the results of the Kalman filter from the previous period sub-sample block, such as a(t|t-1) and P(t|t-1), and to whether they will be used as starting-point entries for the next period block estimation. However, I am not sure whether their "meaning" for the next sub-sample may then be distorted by the separate data filtering (DataPreparation) for that sub-sample.
I am, however, also not sure if I can take this into the C++ DsgeLikelihood in full as yet - there are many unresolved items (and even more not yet properly understood by me, see below) - but I can at least start building a skeleton for its future integration.
E.g., I have quite a few more questions:
5) How are we going to calculate and report the overall (or per-sub-sample?) likelihood(s) to the parameter optimising/sampling function driving DsgeLikelihood (e.g. csminwel, fminunc or MCMC MH), and how do we get it to optimise/sample parameters for different sub-samples? Or is the parameter differentiation parametric/functional and planned to be included in the "update_parameters" block?
6) How will the system know which name structure ID1 targets, given that "ID1 is a n by 1 vector of indices targeting the (parameter, exogenous variable or endogenous variable) names (in M_.param_names, M_.exo_names and M_.endo_names)"?
7) If "the considered subsamples for different parameters may be different", then I believe we need to split the data subsamples and the estimation blocks into shorter common sub-samples. I.e., when you say "xparam1 is a column n by 1 vector holding the current values of the estimated parameters", does "current" mean the value at time t such that max(StartPeriod) < t < min(EndPeriod) for the set?
8) Are priors time-dependent, defined for each sub-sample (or the time slot of the parameter)? Are they updated on the run, conditional on the previous sub-sample, or preset?
9) What do the nv*_id (nv* by 1) vectors of integers represent? If shock IDs, then how can the nv* shocks/errors be linked to individual parameters' periods? I.e., are they time-dependent too, and if not, what is their role in this time-slot-dependent structure?
10) In the loop you refer to "NP" as the number of periods ("LOOP on periods 1 to NP"), but in the structure (lowercase) "np" seems to refer to the number of deep parameters - use of the same word/literal (despite the case difference) may be a bit confusing.
11) Which optimiser implements the method of indirect inference in Dynare?
May I also make some suggestions?
re 4) The block of lines 50-123 updates and checks the shock and observation error covariance matrices Q and H used by the Kalman filter, using the relevant subsets of the parameters vector xparam1 (but not changing xparam1). I believe a more precise and descriptive name for the block (e.g. updateQH() or updateShocksAndErrorMatrices(), or something along those lines) would be less easily confused with, e.g., generating or updating the whole new set of all parameters.
Also, DataCleanup, DataDetrend or DataPreparation instead of "data_filtering()" would, I believe, be less easily confused with, e.g., Kalman filtering.
I hope this helps to clarify rather than confuse the issues even more.
Best regards
George
----- Original Message ----- From: Stéphane Adjemian To: List for Dynare developers Sent: Friday, September 18, 2009 12:09 PM Subject: Re: [DynareDev] Proposed specification for estimation functions
Hi,
I am not sure that it would be a good idea to implement the unexpected breaks stuff in the C++ version of DsgeLikelihood, because it's a new feature and we have no experience with it.
I changed Period1 to StartPeriod and Period2 to EndPeriod; thanks for the suggestion.
In the data filtering step we remove the "deterministic part" of the data (constant and linear trends), so that we do not have to treat this in the Kalman filter recursions. This is already done like this in the Matlab version of DsgeLikelihood.
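To make the detrending step concrete, here is a minimal stand-alone sketch (hypothetical helper, not the actual Matlab/C++ Dynare code): fit a constant and a linear trend to one observed series by OLS and subtract the fitted values, leaving only the stochastic component for the Kalman recursions.

```cpp
#include <cstddef>
#include <vector>

// Remove the "deterministic part" (constant + linear trend) of one observed
// series by OLS: regress y_t on [1, t] and subtract the fitted values.
// Hypothetical helper, not the actual Dynare routine.
std::vector<double> remove_linear_trend(const std::vector<double>& y) {
    const double n = static_cast<double>(y.size());
    double st = 0.0, sy = 0.0, stt = 0.0, sty = 0.0;
    for (std::size_t i = 0; i < y.size(); ++i) {
        const double t = static_cast<double>(i);
        st += t; sy += y[i]; stt += t * t; sty += t * y[i];
    }
    // OLS slope b and intercept a of y_t = a + b*t + residual.
    const double b = (n * sty - st * sy) / (n * stt - st * st);
    const double a = (sy - b * st) / n;
    std::vector<double> out(y.size());
    for (std::size_t i = 0; i < y.size(); ++i)
        out[i] = y[i] - (a + b * static_cast<double>(i));
    return out;
}
```

A purely deterministic series (constant plus trend) comes back as all zeros, which is an easy sanity check on the fit.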
I do not understand 3) and 4). What is an overlapping variable?
The parameter update step corresponds to the block between lines 50 and 123 in DsgeLikelihood.m.
I think that we do have a C version of Sims' optimization routine (provided by Tao Zha).
Best, Stéphane.
2009/9/18 G. Perendia george@perendia.orangehome.co.uk
Hi
I would like to start implementing period-dependent likelihood calculation within the new C++ DsgeLikelihood but that will add a bit of time.
1) I suggest that the parameters description is defined by Start (or StartPeriod) and End (or EndPeriod) instead of Period1 (beginning of sub-period) and Period2 (end of sub-period) respectively, because we may have more than 2 periods, and using the terms Period1 and Period2 as delimiters for other periods can be a bit confusing in the context (as it did confuse me for a start).
2) What is the data_filtering (for period 1) step as opposed to the Kalman filter - is it the Kalman smoother? We have no working C++ implementation of the smoother as yet.
3) Would it be right to say that the new period-divided likelihood estimation requires overlapping variables Y(-/+n) for n>0 among the sub-period blocks?
4) What is the parameters update? Is it the parameter likelihood, hill-climbing optimisation (or MCMC step) for each sub-period block separately (conditional on the previous period block)? We do not have a working optimiser in C++ yet.
4) Would the interim period blocks' Kalman inputs be a(t|t-1) and P(t|t-1), as well as the overlapping variables Y(-/+n) for n>0, from the previous block's Kalman output?
Best regards
George
_______________________________________________ Dev mailing list Dev@dynare.org http://www.dynare.org/cgi-bin/mailman/listinfo/dev
-- Stéphane Adjemian CEPREMAP & Université du Maine
Tel: (33)1-43-13-62-39
Hi George
PS: Re naming suggestions: As M_.params is updated too in the 50-123 block, name update_params is also adequate though something like updateQHMparams() would still be more precise .
----- Original Message ----- *From:* G. Perendia <mailto:george@perendia.orangehome.co.uk> *To:* List for Dynare developers <mailto:dev@dynare.org> *Sent:* Monday, September 21, 2009 10:27 AM *Subject:* Re: [DynareDev] Proposed specification for estimation functions Thanks Stephane Re: 3&4) Sorry for confusion: Overlapping variables refer to the results of Kalman from the previous period sub-sample block such as a(t|t-1) and P(t|t-1) and will not they be used as starting point entries for the next period block estimation. However, I am not sure if then their "meaning" for the next sub-sample may be distorted by separate data filtering (DataPreparation) for that sub-sample.
As the change in parameters is not expected by the agents, the solution is very simple: we simply change the values of the system matrices in the Kalman filter. However, it would be too complicated to write a Kalman filter routine with changing matrices, so we prefer to repeat the basic steps of the current implementation of DsgeLikelihood for each sub-sample during which all parameters remain unchanged. The consistency of the transition from one sub-period to the next is ensured by the proper initialization of the initial state (a0) and initial covariance matrix (P0), which are taken from the last values of the previous sub-sample.
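The per-sub-sample chaining described above can be sketched on a toy univariate state-space model (hypothetical code with made-up names, not the actual DsgeLikelihood): run the Kalman filter over each sub-sample with that sub-sample's fixed parameters, and carry the predicted state and covariance across the break as the next sub-sample's a0 and P0.

```cpp
#include <cmath>
#include <vector>

// Toy model: y_t = a_t + e_t, e_t ~ N(0, h);  a_{t+1} = rho*a_t + u_t,
// u_t ~ N(0, q). One Kalman pass over a sub-sample with fixed (rho, q, h);
// (a, P) are left at their end-of-sub-sample predicted values so the next
// sub-sample can start from them. Hypothetical sketch, not Dynare's code.
double kalman_subsample(const std::vector<double>& y,
                        double rho, double q, double h,
                        double& a, double& P) {
    const double two_pi = 6.283185307179586;
    double loglik = 0.0;
    for (double yt : y) {
        const double F = P + h;            // prediction-error variance
        const double v = yt - a;           // prediction error
        loglik -= 0.5 * (std::log(two_pi * F) + v * v / F);
        const double K = P / F;            // Kalman gain (Z = 1)
        const double a_filt = a + K * v;   // a(t|t)
        const double P_filt = P * (1.0 - K);
        a = rho * a_filt;                  // a(t+1|t)
        P = rho * rho * P_filt + q;        // P(t+1|t)
    }
    return loglik;
}

// Full-sample likelihood with one unexpected break in rho: repeat the basic
// filtering steps per sub-sample, chaining (a0, P0) across the break.
double loglik_with_break(const std::vector<double>& y1, double rho1,
                         const std::vector<double>& y2, double rho2,
                         double q, double h) {
    double a = 0.0;
    double P = q / (1.0 - rho1 * rho1);    // unconditional variance start
    const double l1 = kalman_subsample(y1, rho1, q, h, a, P);
    const double l2 = kalman_subsample(y2, rho2, q, h, a, P);
    return l1 + l2;                        // sum over sub-samples
}
```

A useful consistency check: when rho1 == rho2, the two chained passes reproduce exactly one pass over the concatenated sample.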
I am however also not sure if I can take this into C++ DsgeLikelihood as yet in full - there are many unresolved items (and even more not yet properly understood by me, see below) - but I can at least start building bones for its future integration. E.g. have quite a few more questions: 5) How are we going to calculate and report overall (or sub-sample?) likelihood(s) to the parameter optimising/sampling DsgeLikelihood driving function (eg csminwel, fminunc or MCMC MH) and how to get it to optimise/sample parameters for different sub-samples? Or, is the parameter differentiation parametric/functional and planned to be included in "update_parameters" block?
The likelihood (posterior) for the entire sample is simply computed as the sum of the likelihoods for each sub-sample. Your question however exposes a difficulty that is not discussed on the wiki: from an estimation point of view, changing parameters are treated as different parameters, so in the prior, a constant parameter counts for one, and each different value of a changing parameter also counts for one. The value of the log prior is not computed in the loop, but only once at the end of DsgeLikelihood.
Because we compute the likelihood (posterior) for the entire sample, introducing changing parameters doesn't modify the way the optimizer or the MCMC procedure calls DsgeLikelihood.
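As a concrete reading of the counting rule above (hypothetical bookkeeping, not actual Dynare code): a parameter that takes R distinct values over the sample contributes R entries to xparam1, and hence R prior terms, while a never-changing parameter contributes exactly one.

```cpp
#include <cstddef>
#include <vector>

// regimes_per_param[i] = number of sub-samples over which the i-th
// estimated parameter stays constant (1 for a never-changing parameter).
// Each such regime is one entry of xparam1 and one term of the log prior,
// evaluated once at the end rather than inside the sub-sample loop.
// Hypothetical sketch.
std::size_t xparam1_length(const std::vector<std::size_t>& regimes_per_param) {
    std::size_t n = 0;
    for (std::size_t r : regimes_per_param)
        n += r;
    return n;
}
```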
6) How will the system know which name structure to target ID1 if "ID1 is a n by 1 vector of indices targeting to the (parameter, exogenous variable or endogenous variable) names (in M_.param_names, M_.exo_names and M_.endo_names"
From the type field.
7) If "the considered subsamples for different parameters may be different", then I believe we need to split the data subsamples and the estimation blocks into shorter common sub-samples. I.e., when you say: "xparam1 is a column n by 1 vector holding the current values of the estimated parameters" does "current" mean value at time t such that max(StartPeriod)< t < min(EndPeriod) for the set?
The initialization of estimation must detect the set of sub-samples where all parameters remain constant and make a table of which parameter is active in which subsample. This is not well described in the wiki. We need an additional component in the structure describing the parameters.
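One way to sketch that initialization step (hypothetical types and names, not the actual Dynare code): cut the sample at every StartPeriod and every EndPeriod+1 to obtain the elementary sub-samples on which all parameters are constant, then build the parameter/sub-sample activity table.

```cpp
#include <iterator>
#include <set>
#include <utility>
#include <vector>

struct EstimatedParam { int start; int end; };  // inclusive sub-sample bounds

// Split [1, T] into elementary sub-samples over which every estimated
// parameter is constant: cut at each StartPeriod and each EndPeriod + 1.
std::vector<std::pair<int,int>> elementary_subsamples(
        const std::vector<EstimatedParam>& params, int T) {
    std::set<int> cuts = {1, T + 1};
    for (const auto& p : params) {
        cuts.insert(p.start);
        cuts.insert(p.end + 1);
    }
    std::vector<std::pair<int,int>> out;
    for (auto it = cuts.begin(); std::next(it) != cuts.end(); ++it)
        out.push_back({*it, *std::next(it) - 1});
    return out;
}

// Table: active[k][i] is true when parameter i applies in sub-sample k.
std::vector<std::vector<bool>> active_table(
        const std::vector<EstimatedParam>& params,
        const std::vector<std::pair<int,int>>& subs) {
    std::vector<std::vector<bool>> tab(subs.size(),
                                       std::vector<bool>(params.size()));
    for (std::size_t k = 0; k < subs.size(); ++k)
        for (std::size_t i = 0; i < params.size(); ++i)
            tab[k][i] = params[i].start <= subs[k].first &&
                        subs[k].second <= params[i].end;
    return tab;
}
```

For example, one parameter estimated over 1-100 and another over 41-100 yields the elementary sub-samples (1,40) and (41,100), with only the first parameter active in the first sub-sample.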
8) Are priors time-dependent, defined for each sub-sample (or the time slot of the parameter? Are they updated on the run, conditional on previous sub-sample or preset?
There is a prior for each sub-sample where the parameter remains constant. These sub-samples can be the union of several elementary sub-samples defined just above.
9) what do nv*_id nv* by 1 vector of integers represent? If shock IDs, then how the nv* shocks/errors can be linked to individual params' periods. I.e. are they time dependent too and if not, what is their role in this time-slot-dependent structure?
They are the indices of the parameters of a same type among all estimated parameters (Stephane, am I right?).
10) In the loop you refer to "NP" as number of periods: "LOOP on periods 1 to NP" but in the structure (lowercase) "np" seem to refer to the number of deep parameters- use of same word/literal (despite of the case difference) may be a bit confusing
Right, this is confusing. NP is in fact the number of elementary sub-samples.
11) which optimiser is the method of indirect inference in Dynare?
We don't have indirect inference in Dynare.
May I also make some suggestions? re 4) Block of lines 50-123 updates and checks shocks and obs. errors cov kalman matrices Q and H using relevant subsets of the parameters vector xparam1 (but not changing xparam1). I believe a more precise and descriptive name for the block (e.g. updateQH() or updateShocksAndErrorMatrices() or something on those lines) may be less confusing with e.g. generating or updating the whole new set of all parameters.
Sure. For the time being, these names refer only --imperfectly-- to the functionality. They are not meant to be the definitive names of the functions.
Also, DataCleanup, DataDetrend or DataPreparation instead of "data_filtering()" I believe may be less confusing with e.g. Kalman filtering.
Sure, maybe data_prefiltering. This is not cleanup - we remove trends and constants - and "preparation" is sort of vague.
I hope this helps to clarify rather than confuse the issues even more.
Many thanks, it helps a lot
Best
Michel
Thanks Michel
Can we embed some of your explanations into the EstimationModule Wiki page, e.g. by copying?
Re 10) - I realised what NP is, but it may still be rather confusing to use the same literal for two very different notions. E.g., NP could become NumPeriods, or NSS (number of sub-samples), since lowercase np (number of deep parameters?) is already in (another) use.
Best regards
George
----- Original Message ----- From: "Michel Juillard" michel.juillard@ens.fr To: "List for Dynare developers" dev@dynare.org Sent: Monday, September 21, 2009 2:02 PM Subject: Re: [DynareDev] Proposed specification for estimation functions
Yes, please, do that
Best
Michel
----- Original Message ----- From: "Michel Juillard" michel.juillard@ens.fr To: "List for Dynare developers" dev@dynare.org Sent: Monday, September 21, 2009 2:02 PM Subject: Re: [DynareDev] Proposed specification for estimation functions
Hi George
PS: Re naming suggestions: As M_.params is updated too in the 50-123 block, name update_params is also adequate though something like updateQHMparams() would still be more precise .
----- Original Message ----- *From:* G. Perendia <mailto:george@perendia.orangehome.co.uk> *To:* List for Dynare developers <mailto:dev@dynare.org> *Sent:* Monday, September 21, 2009 10:27 AM *Subject:* Re: [DynareDev] Proposed specification for estimation functions Thanks Stephane Re: 3&4) Sorry for confusion: Overlapping variables refer to the results of Kalman from the previous period sub-sample block such as a(t|t-1) and P(t|t-1) and will not they be used as starting point entries for the next period block estimation. However, I am not sure if then their "meaning" for the next sub-sample may be distorted by separate data filtering (DataPreparation) for that sub-sample.
As change in parameters is not expected by the agents, the solution is very simple, we simply change the values of the system matrices in the Kalman filter. However, it would be too complicated to write a Kalman filter routine with changing matrices, so we prefer repeat the basic steps of the current implementation of DsgeLikelihood for each sub-sample during which all parameters remain unchanged. The consistency of the transition from one sub-period to the next is insured by the proper initialization of initial state (a0) and initial covariance matrix (P0), that are taken from the last value of the previous
sub-sample.
I am however also not sure if I can take this into C++ DsgeLikelihood as yet in full - there are many unresolved items (and even more not yet properly understood by me, see below) - but I can at least start building bones for its future
integration.
E.g. have quite a few more questions: 5) How are we going to calculate and report overall (or sub-sample?) likelihood(s) to the parameter optimising/sampling DsgeLikelihood driving function (eg csminwel, fminunc or MCMC MH) and how to get it to optimise/sample parameters for different sub-samples? Or, is the parameter differentiation parametric/functional and planned to be included in "update_parameters" block?
The likelihood (posterior) for the entire sample is simply computed as the sum of the likelihood for each sub-sample. Your question however exposes a difficulty that is not discussed on the wiki: from an estimation point of view, changing parameters are treated as different parameter, so in the prior, constant parameters count for one, and each different values of a parameter count each for one as well. The value of log prior is not computed in the loopm but only once at the end of DsgeLikelihood.
Because we compute the likelihood (posterior) for the entire sample, introducing changing parameters doesn't modify the way the optimizer of the MCMC procedure calls DsgeLiklihood
6) How will the system know which name structure to target ID1 if "ID1 is a n by 1 vector of indices targeting to the (parameter, exogenous variable or endogenous variable) names (in M_.param_names, M_.exo_names and M_.endo_names"
from the type field
7) If *"**t*he considered subsamples for different parameters may be different", then I believe we need to split the data subsamples and the estimation blocks into shorter common
sub-samples.
I.e., when you say: "xparam1 is a column n by 1 vector holding the current values of the estimated parameters" does "current" mean value at time t such that max(StartPeriod)< t < min(EndPeriod) for the set?
The initialization of estimation must detect the set of sub-samples where all parameters remain constant and make a table of which parameter is active is which subsample. This is not well described in the wiki. We need an additional component to the structure describing the parameters
8) Are priors time-dependent, defined for each sub-sample (or the time slot of the parameter? Are they updated on the run, conditional on previous sub-sample or preset?
There is a prior for each sub-sample where the parameter remains constant. These sub-samples can be the union of several elementary sub-sampels defined just above
9) what do nv*_id nv* by 1 vector of integers represent? If shock IDs, then how the nv* shocks/errors can be linked to individual params' periods. I.e. are they time dependent too and if not, what is their role in this time-slot-dependent structure?
They the indices of a same type in all estimated paramters (Stephane, am I right?)
10) In the loop you refer to "NP" as number of periods: "LOOP on periods 1 to NP" but in the structure (lowercase) "np" seem to refer to the number of deep parameters- use of same word/literal (despite of the case difference) may be a bit confusing
Right, this confusing. NP is in fact the number of elementary sub-samples.
11) which optimiser is the method of indirect inference in Dynare?
we don't have indirect inference in Dynare
May I also make some suggestions? re 4) Block of lines 50-123 updates and checks shocks and obs. errors cov kalman matrices Q and H using relevant subsets of the parameters vector xparam1 (but not changing xparam1). I believe a more precise and descriptive name for the block (e.g. updateQH() or updateShocksAndErrorMatrices() or something on those lines) may be less confusing with e.g. generating or updating the whole new set of all parameters.
Sure. For the time being, this names refer only --imperfectly-- to the functionality. They are not meant to be the definitive names of the functions.
Also, DataCleanup, DataDetrend or DataPreparation instead of "data_filtering()" I believe may be less confusing with e.g. Kalman filtering.
Sure, maybe data_prefiltering. This is not cleanup, we remove trends and constant, preparation is sort of vague
I hope this helps to clarify rather than confuse the issues even
more.
Many thanks, it helps a lot
Best
Michel
Best regards George ----- Original Message ----- *From:* Stéphane Adjemian <mailto:stephane.adjemian@gmail.com> *To:* List for Dynare developers <mailto:dev@dynare.org> *Sent:* Friday, September 18, 2009 12:09 PM *Subject:* Re: [DynareDev] Proposed specification for estimation functions Hi, I am not sure that it would be a good idea to implement the unexpected breaks stuff in the c++ version of the DsgeLikelihood... Because it's a new feature and we have no experience with it. I Changed Period1 to StartPeriod and Period2 to EndPeriod, thanks for the suggestion. In the data filtering step we remove the "deterministic part" of the data (constant and linear trends), so that we do not have to treat this in the kalman filter recursions. This is already done like this in the matlab version of DsgeLikelihood. I do not understand 3) and 4). What is an overlapping variable. The parameter update step corresponds to block between lines 50 to 123 in DsgeLikelihood.m I think that we do have a c version of the sims optimization routine (provided by Tao Zha). Best, Stéphane. 2009/9/18 G. Perendia <george@perendia.orangehome.co.uk <mailto:george@perendia.orangehome.co.uk>> Hi I would like to start implementing period-dependent likelihood calculation within the new C++ DsgeLikelihood but that will add a bit of time. 1) I suggest that parameters description is defined by Start (or StartPeriod) and End (or EndPeriod) instead of a.. Period1 (beginning of sub-period) a.. Period2 (end of sub-period) respectively because we may have more than 2 periods and using terms Period1 and 2 as delimiters for other periods can bee a bit confusing in the context (as it did confuse me for a start) . 2) What is data_filtering (for period 1) step as opposed to Kalman filter - is it Kalman smoother? We have no working C++ implementation for smoother as yet. 
3) Would it be right to say that the new period-divided likelihood estimation requires overlapping variables Y(-/+n) for n>0 among the sub-period blocks?

4) What is the parameters update? Is it parameter likelihood, hill-climbing optimisation (or an MCMC step) for each sub-period block separately (conditional on the previous period block)? We do not have a working optimiser in C++ yet.

5) Would the interim period blocks' Kalman inputs be a(t|t-1) and P(t|t-1), as well as the overlapping variables Y(-/+n) for n>0 from the previous block's Kalman output?

Best regards
George

_______________________________________________
Dev mailing list Dev@dynare.org http://www.dynare.org/cgi-bin/mailman/listinfo/dev

--
Stéphane Adjemian
CEPREMAP & Université du Maine
Tel: (33)1-43-13-62-39
_______________________________________________ Dev mailing list Dev@dynare.org http://www.dynare.org/cgi-bin/mailman/listinfo/dev
Hi
It seems to me that the following values and vectors described on the (new) EstimationModule wiki page are somewhat redundant, as they can apparently be derived from the parameter_description type vector. If so, is there any other reason for keeping them explicitly as part of the structure, apart from performance improvement and easier understanding of the structure?
- nvx: integer scalar (number of estimated structural shock standard deviations)
- nvx_id: nvx by 1 vector of integers
- nvn: integer scalar (number of estimated measurement error standard deviations)
- nvn_id: nvn by 1 vector of integers
- ncx: integer scalar (number of estimated structural shock correlations)
- ncx_id: ncx by 1 vector of integers
- ncn: integer scalar (number of estimated measurement error correlations)
- ncn_id: ncn by 1 vector of integers
- np: integer scalar (number of estimated "deep" parameters)
- np_id: np by 1 vector of integers
Best regards
George
Hi George,
performance improvement was the idea. Let's keep the scalars, as they are used to distinguish different types of estimation problems. You can leave aside the vectors of indices until we encounter the need for them.
Best
Michel
Thanks Michel. As parameters can have different time slots, I think we will need to add two structures: 1) one to organise data around individual parameters, with a vector of descriptions for each of their time slots, and 2) an additional structure listing all identified longest common periods, pointing to each parameter's description for that slot.
Thus, re (1), instead of xparam1 being a vector of doubles and e.g. start_period vectors having n entries, xparam1 will be a vector of vectors of structures; and, re (2), timeslots will be a vector (of unpredetermined size) of common periods, each a vector of length n containing n indices to the relevant parameter's description for that period, including its value and priors.
Best regards
George
----- Original Message ----- From: "Michel Juillard" michel.juillard@ens.fr To: "List for Dynare developers" dev@dynare.org Sent: Thursday, December 10, 2009 10:58 AM Subject: Re: [DynareDev] Proposed specification for estimation functions
Hi George,
performance improvement was the idea. Let's keep the scalars, as they are used to distinguish different types of estimation problems. You can leave aside the vectors of indices until we encounter the need for them.
Best
Michel
Dear George,
I'm not sure that it needs to be that complicated. The estimation algorithm (optimization or MCMC) sees all the parameters for all sub-periods as a single vector.
In updateQHparams(), it is possible to compare the date of the relevant sub-period with StartPeriod and EndPeriod. I don't think that such tests would be very costly.
If I'm wrong and we do need the structures you suggest after all, we will add them later.
All the best,
Michel