From root@crcnis1.unl.edu Mon Jul 3 20:40 EDT 1995
Date: Mon, 3 Jul 1995 19:39:54 -0500
Message-Id: <9507040038.AA06732@sunsite.oit.unc.edu>
From: listserv@unl.edu
Subject: GET AGMODELS-L LOG9403

Archive AGMODELS-L: file log9403, part 1/1, size 29888 bytes:

------------------------------ Cut here ------------------------------


From glyn_rimmington@muwayf.unimelb.edu.au Tue Mar 1 00:17:13 1994
Date: Mon, 28 Feb 1994 14:17:13 +1000
From: Glyn Rimmington <glyn_rimmington@muwayf.unimelb.edu.au>
Subject: Re: AGMODELS-L digest 59
Message-Id: <01H9F78PTM8Y005NF0@muwayb.ucs.unimelb.edu.au>

Department of Agriculture 28/2/94 13:49
SUBJECT: RE>AGMODELS-L digest 59
Don Wauchope writes....
>>I have read the paper by Oreskes, Shrader-Frechette and Belitz in SCIENCE
>>(vol 263, pp 641-646, 1994) and it seems to me you could summarize it as
>>follows:
>> -- all models are imperfect representations of truth
>> -- to "verify" a model means to demonstrate that it is true
>> -- to "validate" a model means to demonstrate that it is internally self-
>> consistent and (I THINK they say this) that it can be shown to mimic
>> reality for some specific case
>>Using these two definitions they then argue that no model is verifiable or
>>validatable. I don't know any informed individual in the known universe that
>>would disagree with their argument about verification.

The corollary of the first is that only reality can be verified - fair
enough.

>>As for validation, I am reminded of the philosopher who, when shown a field
>>of spotted cows, said "well, of course we only know that they are spotted on
>>this side."

According to the Shrader-Frechette & Belitz statement above, there are two
parts to validation: (i) comparing model behaviour with real-system
behaviour at the phenomenological level, e.g. leaf area, biomass and yield of
a crop; and (ii) qualitative analysis of the internal structure of the model
(are we getting the right answers for the right reasons?).

It is a tough thing to do when you have put a lot of effort into a model, but
it has to be done. You have to compare its output for given inputs with data
that have been collected truly independently of the data used to derive the
model coefficients. Many of us lose our objectivity, run out of time on
short-term projects to do this properly, and are not pleased when we get the
observed vs. predicted plots and low values for r^2. Nevertheless, this tells
us something about model performance.

The internal consistency problem is another issue. Many papers describing
models are now light on in the independent validation department but, worse,
they are incredibly cryptic when it comes to describing the internal
workings. Often when you ask for more detail, you get a FORTRAN listing, or
you are told "...that information is commercial and you can't have it unless
you pay us $xx,xxxx..."
I think we were on the right track with CSMP, ACSL and SIMCOMP, but have
taken a significant step backwards over the last 15 to 20 years by coding in
languages like FORTRAN. The former allow, to some degree, a hierarchical
overview of the model, and in some cases (e.g. Stella) offer a Forrester
diagram-like visual metaphor, as in the earlier work of the de Wit et al.
group.

"Modular" or "structured" thinking has been mistakenly replaced by modular or
structured programming, and the true nature of many models has become obscured
in a mire of spaghetti code which even the authors aren't too sure about. I
think we had a better idea of what we were doing then (in the CSMP era) than
we do now.

That's my "two bob's worth" on the topic. I've probably upset a few people
by now. Let's have a debate.

Glyn Rimmington



From R.MATTHEWS@CGNET.COM Mon Feb 28 23:01:38 1994
Date: Tue, 1 Mar 1994 12:07 PHL (GMT +8:00)
From: Robin Matthews <R.MATTHEWS@CGNET.COM>
Subject: Introduction ...
Message-Id: <01H9GGZDHDZK00055X@irri.cgnet.com>

I sent an introduction email a few weeks ago, but didn't see it
appear, so maybe I sent it to the wrong address or something.

I am working at the International Rice Research Institute (IRRI) on
modelling of irrigated rice. My position is as theme coordinator in
the SARP network coordinating modelling activities in 15 National
Agricultural Research Centres in South East Asia. Research interests
are in climate change effects on regional rice production, genotype x
environment interactions, tiller formation, and sink/size relationships.
Present employer is AB-DLO, Wageningen, Netherlands. Previously, I
worked on developing the IBSNAT cassava model at Guelph, Canada.

I agreed very much with the comments of John Bolte (22-FEB-1994) on
the use of object-oriented code for models and their interfaces. Here,
we are following a similar path of developing models and tools
(associated with running the models and analysing the output) as
objects which can then be combined in various ways depending on the
needs of the user. I am using Turbo Pascal; all objects are written to
be compatible with both DOS and Windows. Interfaces specific to DOS or
Windows then sit on top of the objects and 'coordinate' their
activity, depending on the user. The interfaces can actually be in any
language - one recent application using these objects is written in
FoxPro. I think if modellers can start thinking in terms of objects
that can do various tasks, some of the (unfortunately true) comments
of Glyn Rimmington (28-FEB-1994) about unstructured and "spaghetti"
programming can be dealt with. With inheritance and polymorphism, code
becomes much more reusable, so that we don't have to keep on
reinventing the wheel for each new application that is written. By the
way, does anyone out there know of any plans to make Fortran object-
oriented? (Not that I like Fortran, but a lot of modellers still use
it!).
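Matthews describes his objects in Turbo Pascal; as a rough, language-neutral sketch of the inheritance and polymorphism he mentions (all class names, parameters and numbers below are invented for illustration, not taken from the IRRI/SARP code):

```python
# Hypothetical sketch: two crop-growth components share one interface,
# so the driver loop never needs to know which concrete model it runs.
class GrowthModel:
    """Base class: anything with a daily_biomass_gain() can be plugged in."""
    def daily_biomass_gain(self, radiation):
        raise NotImplementedError

class LinearGrowth(GrowthModel):
    """Biomass gain proportional to intercepted radiation (RUE approach)."""
    def __init__(self, rue):
        self.rue = rue  # radiation-use efficiency, g biomass per MJ
    def daily_biomass_gain(self, radiation):
        return self.rue * radiation

class SaturatingGrowth(GrowthModel):
    """Gain saturates at high radiation (an illustrative alternative)."""
    def __init__(self, gmax, half_sat):
        self.gmax = gmax
        self.half_sat = half_sat
    def daily_biomass_gain(self, radiation):
        return self.gmax * radiation / (self.half_sat + radiation)

def run_season(model, daily_radiation):
    """Driver: works with any GrowthModel subclass (polymorphism)."""
    biomass = 0.0
    for rad in daily_radiation:
        biomass += model.daily_biomass_gain(rad)
    return biomass

print(run_season(LinearGrowth(rue=1.2), [10, 12, 8]))       # total biomass
print(run_season(SaturatingGrowth(20.0, 10.0), [10, 12, 8]))
```

The driver never needs changing when a new growth model is added; one new subclass suffices, which is the reuse Matthews is after.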

One thing that has made life much easier in developing these objects
is the use of a standard format for experimental data, and model input
and output. We are using the DSSAT v3.0 format developed by the IBSNAT
group, mainly because it is fairly well documented, and similar to the
format used by many experimenters. We have developed objects for
reading the different input files (management, crop parameter, soil
parameter, and experimental data files), each with a lot of error-
trapping (and range-checking for some of the variables) built in, and
others for reformatting into the format used by the Wageningen series
of models and the DSSAT v2.1/3.0 series. Our Biometrics Department is
also very interested in developing data entry and statistical tools
around the DSSAT standard. My view is that modelling will really
progress if all sorts of tools compatible with a standard format can
be developed by various groups throughout the world and made available
to other modellers. I guess that a corollary of this is that the
source code can also be made available; maybe that is too idealistic!
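A hedged sketch of the kind of input-reading object with built-in error trapping and range checking that Matthews describes (the four-column file layout and the variable ranges here are invented stand-ins, not the actual DSSAT v3.0 format):

```python
# Invented example format: 'DATE TMAX TMIN RAIN' records, one per line.
# The ranges below are illustrative, not official DSSAT limits.
class WeatherFileReader:
    """Reads weather records, trapping malformed and out-of-range values."""
    RANGES = {"TMAX": (-30.0, 60.0), "TMIN": (-40.0, 40.0), "RAIN": (0.0, 500.0)}

    def read(self, lines):
        records = []
        for n, line in enumerate(lines, start=1):
            fields = line.split()
            if len(fields) != 4:
                raise ValueError(f"line {n}: expected 4 fields, got {len(fields)}")
            date, *values = fields
            record = {"DATE": date}
            for name, raw in zip(("TMAX", "TMIN", "RAIN"), values):
                try:
                    value = float(raw)
                except ValueError:
                    raise ValueError(f"line {n}: {name} is not a number: {raw!r}")
                lo, hi = self.RANGES[name]
                if not lo <= value <= hi:
                    raise ValueError(f"line {n}: {name}={value} outside [{lo}, {hi}]")
                record[name] = value
            records.append(record)
        return records

reader = WeatherFileReader()
print(reader.read(["94001 31.5 22.0 0.0", "94002 30.1 21.4 12.5"]))
```

Because every tool that reads or writes the standard format can reuse one well-tested reader object, the error trapping is written once rather than per application.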

I would be interested to hear of any comments that people might have.

===============================================================================
Robin Matthews Tel: +63 2 818-1926 (ext.436)
International Rice Research Institute Fax: +63 2 818-2087
P O Box 933 Email: R.MATTHEWS@CGNET.COM
1099 Manila Tlx: (ITT) 45365 RICE PM
PHILIPPINES.
===============================================================================



From wallach@ossau.toulouse.inra.fr Tue Mar 1 12:32:24 1994
Date: Tue, 1 Mar 94 11:32:24 +0100
From: wallach@ossau.toulouse.inra.fr (Daniel Wallach)
Message-Id: <9403011032.AA12119@ossau.toulouse.inra.fr>
Subject: model evaluation

I would like to make a comment about model validation and verification.

We here at Toulouse have taken a very pragmatic approach to the
question. I think everybody agrees that the way to test a model
depends on what the model is going to be used for. We have considered
one particular type of use - use of the model for yield prediction for
some defined range of conditions. In that case the question is not
whether or not the model is true, but rather how well does the model
predict. The first question, then, is what makes a reasonable criterion
for judging predictive quality. We use the mean squared error of
prediction, because it is convenient and well known in statistics.
It is also closely related to the problem of interest, since
it directly measures how good a predictor the model is. The second
problem is how to estimate the value of this criterion. There again,
statistics furnishes a range of methods, assuming one has some data
for comparing predictions and observations.

The nice thing about this criterion is that it provides a continuous
scale for evaluating a model. We don't have to choose between saying
the model is invalid (so throw it out), or the model is valid (so stop
trying to improve it). Rather, we associate a number with the model.
And if someone proposes a new model, we can see if it's better than
the old one.
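Wallach's criterion is simple enough to state in a few lines of code. This sketch uses invented yield figures purely to show how two models can be ranked on the same continuous scale:

```python
# Mean squared error of prediction (MSEP) as a model-comparison
# criterion. The yield numbers are made up for illustration.
def msep(observed, predicted):
    """Mean squared error of prediction over paired observations."""
    assert len(observed) == len(predicted)
    return sum((o - p) ** 2 for o, p in zip(observed, predicted)) / len(observed)

observed = [5.1, 6.3, 4.8, 7.0]   # e.g. measured yields, t/ha
model_a = [5.0, 6.0, 5.2, 6.5]    # predictions from model A
model_b = [4.0, 7.5, 4.1, 7.9]    # predictions from model B

# The model with the smaller MSEP predicts better for these conditions:
print(msep(observed, model_a))
print(msep(observed, model_b))
```

Rather than a valid/invalid verdict, each model gets a number, so a proposed new model can be compared directly against the old one.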

If you are interested in this approach, see Biometrics 43, 561-573,
Ecological Modelling 44, 299-306 or Colson et al. Agronomy J. in
press.


From agm@msor0.ex.ac.uk Tue Mar 1 09:06:55 1994
From: Alan Munford <agm@msor0.ex.ac.uk>
Date: Tue, 1 Mar 94 09:06:55 GMT
Message-Id: <11118.9403010906@msor0.msor.exeter.ac.uk>
Subject:

unsubscribe

--
Alan Munford JANET: agm@uk.ac.exeter.msor
MSOR Dept, Exeter University BITNET: agm%uk.ac.exeter.msor@ukacrl
Laver Building, North Park Road, Tel: +44 392 264470 (home 215680)
EXETER, UK. EX4 4QE Fax: +44 392 264460



From lwu@soils.umn.edu Tue Mar 1 04:06:02 1994
From: "Laosheng Wu" <lwu@soils.umn.edu>
Date: Tue, 1 Mar 94 10:06:02 CST
Message-Id: <1905.lwu@soils.umn.edu_POPMail/PC_3.2.3_Beta_2>
Subject: Re: model evaluation

> ......
>In that case the question is not whether or not the model is true, but
>rather how well does the model predict.

IF I UNDERSTAND IT CORRECTLY, I strongly agree with Dr. Daniel Wallach's
comment. I think it is understandable to everyone involved in modeling
work that verification, validation, or confirmation means the same thing -
testing or evaluating the performance of a model. It's just a matter of
wording. There is nothing to do with "The problem of 'Truth' (Science 263:
641-646)".

L. Wu/Univ. of Minnesota


From jon@gpsrv1.gpsr.colostate.edu Tue Mar 1 02:34:27 1994
Date: Tue, 01 Mar 1994 09:34:27 MST
From: "Jon D. Hanson, (303)490-8323" <jon@gpsrv1.gpsr.colostate.edu>
Message-Id: <0097AC6E.A3C53720.24473@gpsrv1.gpsr.colostate.edu>
Subject: Re: Model Validation Stuff

I find I must make two points concerning model development and validation.

1. If model verification means checking a model against "reality", then
not only is model verification impossible, but our very existence
cannot be verified either. As the idealists argue, my
reality is not the same as your reality; my perceptions are not the
same as your perceptions. So, who is right? The very notion of
reality has no absolutes and therefore cannot be scientifically
verified. Models are our perception of reality. I, therefore,
propose that verification is merely the process of determining that
the code we wrote to represent some "real" system functions in the
way we intended it to function. In other words, does the model do
what it was intended to do and work in the way it was intended to
work?

2. I get tired of hearing people say that if a programmer uses FORTRAN,
there is no way to work with structure or modularity. Modeling is
the entire process of analyzing a system, simplifying the system
so the human mind can understand it, working out an algorithm for
simulating that system, and then writing computer code (if that is
the goal of the programmer). Subsequent code evaluation, enhancement,
and testing of the computer code should be considered part of the
code development--making the code do what we want it to do. I
realize this is a simpleton's approach, but my point is that the actual
coding is only a small part of the process. I can code in many
languages. All languages have their strengths and weaknesses.
I can and do program modularly and structurally in FORTRAN. I also
am quite adept at writing spaghetti code in C++. I find that the
theoretical and design work is much more critical than the language
I choose to use.

Thanks for listening to my ramblings.

+---------------------------------------++---------------------------------+
| Dr. Jon D. Hanson || Comm: (303)490-8323 |
| USDA, Agricultural Research Service || Fax: (303)490-8310 |
| Great Plains Systems Research || jon@gpsrv1.gpsr.colostate.edu |
| 301 S. Howes, P.O. Box E || FTS2000: a03jonhanson |
| Fort Collins, Colorado 80522 || |
+---------------------------------------++---------------------------------+


From coopl@BCC.ORST.EDU Tue Mar 1 01:45:03 1994
Date: Tue, 1 Mar 1994 09:45:03 -0800 (PST)
From: Leonard Coop <coopl@BCC.ORST.EDU>
Subject: Re: Model Validation Stuff
In-Reply-To: <0097AC6E.A3C53720.24473@gpsrv1.gpsr.colostate.edu>
Message-Id: <Pine.3.07.9403010901.B477-a100000@ava.bcc.orst.edu>

Introduction:
Leonard Coop, Research Associate
Entomology Dept. Oregon State University
Pest and Crop loss modeling off and on
for 10 years.

I must enter into this stuff. I always have assumed these definitions:

Verification - the model works correctly as intended for data set #1
Validation - the model performs according to objective criteria for
data set #2 (different than #1)

I know keeping it simple may cut down some of this very interesting
discussion, but I'll take simplicity any time.



From wallach@ossau.toulouse.inra.fr Wed Mar 2 12:47:02 1994
Date: Wed, 2 Mar 94 11:47:02 +0100
From: wallach@ossau.toulouse.inra.fr (Daniel Wallach)
Message-Id: <9403021047.AA15140@ossau.toulouse.inra.fr>
Subject: Re: Model Validation Stuff

>
> Introduction:
> Leonard Coop, Research Associate
> Entomology Dept. Oregon State University
> Pest and Crop loss modeling off and on
> for 10 years.
>
> I must enter into this stuff. I always have assumed these definitions:
>
> Verification - the model works correctly as intended for data set #1
> Validation - the model performs according to objective criteria for
> data set #2 (different than #1)
>
> I know keeping it simple may cut down some of this very interesting
> discussion, but I'll take simplicity any time.
>
These definitions are short, but I'm not sure that they are really
that simple. They suppose that we have two data sets. Are they from the same
population (i.e. the same set of conditions, for instance corn in a
particular region subject to standard management practices), or from
different populations (e.g. one from controlled experiments, another from
field trials)? If they are from different populations,
which is the population of interest? If it is population 2, then it seems
odd to develop the model for population 1 and just use population 2 to test
the model. Why not use this (usually expensive) information from
population 2 to improve the model? And if the population of interest is
population 1, then why test the model on population 2?

Or suppose that both data sets represent the same population. Then we are just
splitting the data, using part for model development and part for testing.
That is o.k. if there is lots of data. But very often, the amount of
data is limited. Then again, it is a pity not to use all the data
for model development; after all, that will normally give a better model.

So basically, I have two objections to the proposed definitions. First,
it does not seem to be a good idea to define validation
in terms of data sets. A model is usually meant to apply to some
population of conditions, and model evaluation should refer to that
population. Secondly, it is a pity to build data splitting into the
definition of model evaluation, since this may not be a good idea
if the total amount of data is limited. There are statistical
techniques (bootstrap, cross-validation) that allow one to use
all the data for model development and still have a reasonable
estimator of model predictive quality.
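One of the techniques Wallach mentions, leave-one-out cross-validation, can be sketched as follows (the one-parameter model and the data values are invented for illustration):

```python
# Leave-one-out cross-validation: every observation is used both for
# fitting and, once while held out, for testing, giving an MSEP
# estimate without permanently sacrificing any data for a test set.
def fit_slope(xs, ys):
    """Least-squares slope for the no-intercept model y = a * x."""
    return sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)

def loo_msep(xs, ys):
    """Estimate MSEP by leaving each observation out in turn."""
    errors = []
    for i in range(len(xs)):
        train_x = xs[:i] + xs[i + 1:]
        train_y = ys[:i] + ys[i + 1:]
        a = fit_slope(train_x, train_y)          # fit without point i
        errors.append((ys[i] - a * xs[i]) ** 2)  # predict point i
    return sum(errors) / len(errors)

xs = [1.0, 2.0, 3.0, 4.0, 5.0]   # invented input variable
ys = [1.1, 1.9, 3.2, 3.9, 5.1]   # invented observed responses
print(loo_msep(xs, ys))           # small here: the simple model fits well
```

The final model coefficients are then fitted on all the data; the cross-validation loop exists only to estimate predictive quality honestly.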

Daniel Wallach
INRA Biometrie
Toulouse France
email: wallach@toulouse.inra.fr


From m_olivei@utad3.utad.pt Wed Mar 2 17:40:31 1994
Date: Wed, 2 Mar 94 17:40:31 GMT
From: m_olivei@utad3.utad.pt (Manuel Oliveira)
Message-Id: <9403021740.AA08905@utad3.utad.pt>

sub



From jon@gpsrv1.gpsr.colostate.edu Wed Mar 2 03:21:36 1994
Date: Wed, 02 Mar 1994 10:21:36 MST
From: "Jon D. Hanson, (303)490-8323" <jon@gpsrv1.gpsr.colostate.edu>
Message-Id: <0097AD3E.64586000.25074@gpsrv1.gpsr.colostate.edu>
Subject: Small Ruminant Model

If anyone knows of a small ruminant model, i.e. sheep and goats, that is a
herd-class model other than the Texas A&M model, please let me know at

barry@gpsrv1.gpsr.colostate.edu

Thanks,

Barry Baker



From jp@unlinfo.unl.edu Mon Mar 7 10:17:31 1994
From: jp@unlinfo.unl.edu (jerome pier)
Message-Id: <9403072217.AA22963@unlinfo.unl.edu>
Subject: February 1994 archives now available
Date: Mon, 7 Mar 1994 16:17:31 -0600 (CST)

Dear List Subscribers,

Just a quick reminder that you can now get the archive
files for all the posts to the lists for the month of February
1994. This month was a very interesting one on Agmodels-l and the
model validation debate will hopefully rage on! List subscriber
numbers have levelled off for the two lists at around 250 each!!
An advertisement in Agronomy Journal for the lists should be in
the next issue which could add to the number of subscribers. The
more the merrier!

To get a copy of the february 94 archives, send the following
email to listserv@unl.edu:

get agmodels-l log0294

for the agmodels-l archive or:

get soils-l log0294

for soils-l archive.

Once again, my role as List owner is to facilitate the use of
these lists as a tool for exchange of information and ideas
pertaining to soils and agricultural models. If any of you need
further assistance, let me know and I will be glad to help!

Sincerely,

Jerome Pier
Agmodels-l and Soils-l List Owner
jp@unl.edu



From mpvayssieres@ucdavis.edu Tue Mar 8 05:38:24 1994
Date: Tue, 8 Mar 1994 13:38:24 -0800 (PST)
From: Marc Vayssieres <mpvayssieres@ucdavis.edu>
Subject: Another introduction
In-Reply-To: <9403072217.AA22963@unlinfo.unl.edu>
Message-Id: <Pine.3.89.9403081301.A5352-0100000@cassatt.ucdavis.edu>

(I have tried to send this earlier, but I think it did not go through)

Hello to all,
I am a graduate student in agro-ecology with interests in agricultural
systems, rangelands and natural resources management. For my master
thesis, I have worked on ELMAGE, a dynamic simulation model of the California
annual grassland (ELMAGE 92). The basic structure of ELMAGE is an
adaptation, to our annual plant dynamics, of ELM 73, a model of the
shortgrass prairie constructed in the US/IBP Grassland Biome. At present,
I am involved in an effort to link expert systems and GIS to model the
response of California oak woodlands to fire, grazing and wood-cutting,
at the landscape level.
I am now using AI methods because I believe that models based on
the qualitative knowledge of field experimenters and expert practitioners
are potentially better at generating useful hypotheses and guiding
management at the ecosystem level than traditional mechanistic models.
However, I retain an interest in various kinds of modeling and look forward
to discussing the methods, problems and usefulness of modeling in agriculture.

==============================================================================
Marc Vayssieres Internet: mpvayssieres@ucdavis.edu
Department of Agronomy and Range Science Bitnet: mpvayssieres@ucdavis.bitnet
University of California at Davis
DAVIS CA. 95616 (USA)
==============================================================================


From jp@unlinfo.unl.edu Tue Mar 8 10:28:09 1994
From: jp@unlinfo.unl.edu (jerome pier)
Message-Id: <9403082228.AA03099@unlinfo.unl.edu>
Subject: Oops! sorry!
Date: Tue, 8 Mar 1994 16:28:09 -0600 (CST)

Dear list subscribers,

I apologize for an error which I posted regarding the
retrieval of the february archives for the agmodels-l and
soils-l lists. I would like to thank Steve Modena and Bruce Curry
for pointing out to me that the archive file names are actually

log9402 and NOT log0294 as I erroneously posted. So everyone try
this command:

get soils-l log9402

-or-

get agmodels-l log9402

to get copies of the February archives for the respective lists.
Once again pardon the mistake and thanks for pointing it out.

Jerome Pier
List Owner
jp@unl.edu



From jp@unlinfo.unl.edu Wed Mar 9 10:44:04 1994
From: jp@unlinfo.unl.edu (jerome pier)
Message-Id: <9403092244.AA05488@unlinfo.unl.edu>
Subject: Just a reminder...Post commands to listserv
Date: Wed, 9 Mar 1994 16:44:04 -0600 (CST)

Dear subscribers,

As many of you have noticed, following my post regarding
how to retrieve the list archives, there have been a lot of
listserv commands posted to the mail list. I would just like to
remind subscribers that anything which is a list command, such as
get listname log9402, set listname mail digest, etc. should be
sent to the address listserv@unl.edu (the same address you
subscribed to) and _not_ to the discussion list(s). I will admit
to you that this system takes some getting used to and is not
'intuitively obvious to the casual observer' but that's the way it
works for now. Please let me know if you are having trouble
understanding this and I will be more explicit.

Sincerely,

Jerome Pier
Agmodels-l and Soils-l List Owner
jp@unl.edu



From flick@unixg.ubc.ca Wed Mar 23 07:07:18 1994
Date: Wed, 23 Mar 1994 15:07:18 -0800 (PST)
From: Robert Flick <flick@unixg.ubc.ca>
Subject: Modelling with fuzzy sets
Message-Id: <Pine.3.05.9403231518.A21603-a100000@netinfo.ubc.ca>

Hello!

I have been researching the application of fuzzy sets and systems
to the modelling of uncertainty in forest and agricultural systems.
My current application uses fuzzy dynamic programming for sequential
crop or fallow decisions in dryland wheat production. The dynamics,
states (soil moisture), goals and constraints are represented with
fuzzy sets.
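For readers unfamiliar with the machinery, here is a minimal sketch of graded (fuzzy) soil-moisture states and the common min-intersection rule for combining goals and constraints; the breakpoints and membership values are invented, not taken from Flick's model:

```python
# Soil moisture is graded into overlapping fuzzy states rather than
# crisp categories, and a decision's worth is the minimum of its goal
# and constraint memberships. All numbers are illustrative.
def triangular(x, left, peak, right):
    """Triangular membership function on [left, right], peaking at `peak`."""
    if x <= left or x >= right:
        return 0.0
    if x <= peak:
        return (x - left) / (peak - left)
    return (right - x) / (right - peak)

def moisture_memberships(mm):
    """Degree to which stored soil moisture (mm) is low/medium/high."""
    return {
        "low":    triangular(mm, -1, 0, 100),
        "medium": triangular(mm, 50, 125, 200),
        "high":   triangular(mm, 150, 250, 351),
    }

# A moisture level can belong partly to two states at once:
print(moisture_memberships(80))   # partly 'low', partly 'medium'

# Fuzzy decision: intersect (take the min of) goal and constraint.
goal_crop = 0.7        # membership of "plant a crop" in the yield goal
constraint_crop = 0.4  # its membership in, say, an erosion constraint
print(min(goal_crop, constraint_crop))   # 0.4
```

Because states overlap, a dynamic program over these memberships avoids the abrupt decision switches that crisp soil-moisture thresholds would produce.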

I am wondering if there are others out there who have looked at using
fuzzy logic for agricultural modelling. I have a few papers on
soil classification using fuzzy sets, and some on fuzzy sets for
forest planning. Anyone else? If so, please let me know (e-mail
address below).

+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| Robert J. Flick, Agricultural Economics, UBC |
| e-mail: flick@unixg.ubc.ca |
| phone: 604-327-9854 |
| addr: 6454 Argyle St. Vancouver, B.C. V5P-3K3 |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+



From brianf21@aol.com Sun Mar 27 10:04:45 1994
From: brianf21@aol.com
Message-Id: <9403271504.tn166201@aol.com>
Date: Sun, 27 Mar 94 15:04:45 EST
Subject: Re: Modelling with fuzzy sets

I am not currently interested in fuzzy logic for agricultural modelling, but
if I run into anyone who is, I will give them your address if you don't mind.


From flick@unixg.ubc.ca Sun Mar 27 04:05:49 1994
Date: Sun, 27 Mar 1994 12:05:49 -0800 (PST)
From: Robert Flick <flick@unixg.ubc.ca>
Subject: testing my reception of agmodels-l
Message-Id: <Pine.3.05.9403271249.A19309-5100000@unixg.ubc.ca>

just a test.
------------------------------ Cut here ------------------------------



Prepared by Steve Modena AB4EL modena@SunSITE.unc.edu