Hi Michel and Sébastien,

The implementation I had chosen for SvarIdentificationStatement (modified by the conversation regarding the maximum lag) is

class SvarIdentificationStatement : public Statement
{
public:
  typedef map<pair<int, int>, vector<string> > svar_identification_exclusion_type;
private:
  const svar_identification_exclusion_type exclusion;
  const bool upper_cholesky_present;
  const bool lower_cholesky_present;
  const SymbolTable &symbol_table;
  int get_max_lag() const;
public:
  SvarIdentificationStatement(const svar_identification_exclusion_type &exclusion_arg,
                              const bool &upper_cholesky_present_arg,
                              const bool &lower_cholesky_present_arg,
                              const SymbolTable &symbol_table_arg);
  virtual void writeOutput(ostream &output, const string &basename) const;
};

The map above uses pair<lag number, equation number> as the key and a vector of restrictions as the value.

The vector<string> type is similar to the SymbolList type. I am not using SymbolList because a plain vector provides direct access to its elements when writing the output.
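To make the layout of the exclusion map concrete, here is a minimal, self-contained sketch of how it might be populated. The variable names ("y", "pie", "R") and the particular restrictions are made up for illustration; only the typedef comes from the class above.

```cpp
#include <map>
#include <string>
#include <utility>
#include <vector>

using namespace std;

// Same typedef as in SvarIdentificationStatement
typedef map<pair<int, int>, vector<string> > svar_identification_exclusion_type;

// Hypothetical example: exclude variables "y" and "pie" from equation 2
// at lag 1, and "R" from equation 1 at lag 2. The key is
// (lag number, equation number); the value is the list of excluded variables.
svar_identification_exclusion_type
make_example_exclusion()
{
  svar_identification_exclusion_type exclusion;
  exclusion[make_pair(1, 2)].push_back("y");
  exclusion[make_pair(1, 2)].push_back("pie");
  exclusion[make_pair(2, 1)].push_back("R");
  return exclusion;
}
```

Because the value is a plain vector<string>, writeOutput() can iterate over each entry's restrictions directly, without going through a SymbolList accessor.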

I am sorry, but I do not quite see the purpose of fillEvalContext from InitParamStatement or how it would be useful in this case. Why can't I simply take the information in and then write the output as described on the Wiki?

Again, I'm so terribly sorry for the confusion on this matter and for needing things explained in so much detail.

Best,
Houtan



2009/12/4 Sébastien Villemot <sebastien.villemot@ens.fr>
On Friday, 4 December 2009 at 09:05 +0100, Michel Juillard wrote:
> Hi Houtan,
> > Dear Michel and Sébastien,
> >
> > Everything is coded and tested for the svar_identification. However, I
> > want to check my implementation of the maximum lag length (r) with
> > you. In order to obtain r, I have introduced a temporary variable
> > named max_endo_lag in ParsingDriver, which is updated every time
> > add_model_variable() is called. Is this the best way to do this, or
> > can you think of a better way? I should point out that it is my
> > understanding that the variables named max_endo_lag in
> > DynamicModel do not actually contain the original maximum endogenous
> > lag in the model (which is what I understand should be the value for
> > r). Otherwise, I would have simply included dynamic_model in the
> > SvarIdentificationStatement class.
> >
> This is true. (S)VAR will be the only type of model left with lags
> of more than one period.

Actually for deterministic models we don't remove leads and lags of more
than one period. Currently, the transformation is only applied for
stochastic models.

What I don't understand is why you rely on add_model_variable() for
computing a maximum endo lag, since, as Michel points out, there is no
model block when doing (S)VAR... If this means that you store
VariableNodes inside SvarIdentificationStatement, this is probably a bad
idea: you'd better store symbol IDs and leads/lags directly as integers.

Otherwise, to be more precise, what are you computing exactly? Since
there is no model block, there is no concept of model maximum endogenous
lag.

If you mean the max endo lag inside the svar_identification block, then
this should be computed by a method inside the
SvarIdentificationStatement class, depending on when you need that
information... Doing this during the parsing pass breaks the idea of
clearly separating the various tasks.
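Since the exclusion map already keys its entries by (lag number, equation number), computing the maximum lag inside the class amounts to scanning the keys. This is only a sketch of what the declared get_max_lag() could do; the actual definition is not shown in this thread, so treat the body below as an assumption, written here as a standalone function for illustration.

```cpp
#include <map>
#include <string>
#include <utility>
#include <vector>

using namespace std;

typedef map<pair<int, int>, vector<string> > svar_identification_exclusion_type;

// Hypothetical sketch of SvarIdentificationStatement::get_max_lag():
// the first component of each key is the lag number, so the maximum
// lag inside the svar_identification block is the largest such value.
int
get_max_lag(const svar_identification_exclusion_type &exclusion)
{
  int max_lag = 0;
  for (svar_identification_exclusion_type::const_iterator it = exclusion.begin();
       it != exclusion.end(); it++)
    if (it->first.first > max_lag)
      max_lag = it->first.first;
  return max_lag;
}
```

Computing this lazily inside the class, rather than during parsing, keeps the parsing pass free of semantic work, as suggested above.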

Sorry if I am missing something.

Best


--
Sébastien Villemot