Gianni Bellocchi (born July 22, 1969) is a researcher in agricultural and related sciences. He is credited with the development of approaches and tools for the validation of estimates and measurements. His introduction of fuzzy logic into the context of validation is often considered his most significant contribution to the field of model and method validation.[1][2]
He also helped bring validation issues to the fore in agro-ecological modelling and analytical methods through his reviews of the specialist literature. His approach to the aggregation of multiple validation metrics has influenced the way validation results are viewed. In this respect he is credited with establishing a hierarchy of preferences in validation and a classification of models and methods in terms of aggregated, weighted metrics.[citation needed]
Gianni Bellocchi was born in Acquapendente and spent his formative years in San Lorenzo Nuovo, in the province of Viterbo (Italy), near Lago di Bolsena. His parents, Giuseppe (born 1944) and Adriana (born 1946), are retired farmers. He attended primary and middle school in his village, and the agricultural high school in Bagnoregio. Starting in 1988, he studied agricultural sciences at the University of Pisa and at the Sant'Anna School of Advanced Studies of Pisa. He graduated in 1993 and received a PhD in 1997. He learned statistical data processing and modelling approaches from the entomologist Fabio Quaglia, the physicist Franco Martorana, and the modellers Frits W.T. Penning de Vries (Wageningen University, Netherlands) and Claudio O. Stockle (Washington State University, Pullman, Washington, U.S.).

He joined the staff of agronomy and agro-meteorology modellers of the Research Institute for Industrial Crops of Bologna in 1999, and developed a large number of scientific contributions under the leadership of Marcello Donatelli. From 2006 to 2009 he was a contractual agent at the European Commission's Joint Research Centre in Ispra, Italy. In 2010, he became a senior scientist at the French National Institute for Agricultural Research (INRA), Grassland Ecosystem Research Unit (UREP), and, as of February 1, 2014, research director. On January 25, 2011, he received the Habilitation à Diriger des Recherches from Blaise Pascal University of Clermont-Ferrand (France). He is a member of the Italian Society for Agronomy, the European Society for Agronomy, the American Society of Agronomy, and the International Environmental Modelling and Software Society. As an affiliate of the Monte Pino Met European Research Observatory, he co-edited the book "Storminess and Environmental Change" (Diodato and Bellocchi, 2014, Springer). From 2013 to 2016 he coordinated the MODEXTREME project of the European Union's 7th Framework Programme.
Bellocchi's work in validation has had implications for the assessment of models and analytical methods. Genuine insight into the results of a model, or of an analytical method, requires that multiple aspects of quality assessment be taken into account and formalized together. Fuzzy set theory, formalized by Professor Lotfi Zadeh at the University of California, Berkeley in 1965, was identified as directly applicable to the assessment of numerical outcomes because of its ability to aggregate multiple, possibly contradictory, evaluation measures. Many of the basic principles of this theory are now generally accepted in many areas. Its application in a validation context opened up a new way to investigate results from a modelling process or an analytical method. In 2001, Bellocchi and co-workers first proposed the use of fuzzy logic to evaluate model estimates at the Second International Symposium on Modelling Cropping Systems (Florence, Italy), and in 2002 the approach gained international recognition.[1] Further extensions and applications followed (as reported in Authorship).
The procedure, based on the multi-valued fuzzy sets introduced by Professor Lotfi Zadeh, follows the Sugeno method of fuzzy inference.[3] Three membership classes are defined for each metric used in the validation work, according to expert judgment: Favorable (F), Unfavorable (U), and partial (or fuzzy) membership, using S-shaped curves as transition possibilities in the range F to U. The transition curve, reconstructed here as Zadeh's standard S-function (consistent with the definitions that follow), is:

\[
S(x; a, b) =
\begin{cases}
0 & x \le a \\
2\left(\frac{x-a}{b-a}\right)^{2} & a < x \le c \\
1 - 2\left(\frac{x-b}{b-a}\right)^{2} & c < x < b \\
1 & x \ge b
\end{cases}
\]

where x is the value of the basic input; a is the lower bound of the transition interval [min(F, U)]; b is the upper bound of the transition interval [max(F, U)]; and c = (a + b)/2. According to the equation, if a = F, then x ≤ a means x = F, and S(x; a, b) gives the degree of membership of the index value x to the set U. Its complement, 1 − S(x; a, b), gives the degree of membership of the index value x to the set F.
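A minimal Python sketch of this membership computation follows; the thresholds and the metric value are illustrative assumptions, not values from Bellocchi's work:

```python
def s_curve(x, a, b):
    """Zadeh's S-shaped transition function on [a, b], with c = (a + b) / 2."""
    c = (a + b) / 2.0
    if x <= a:
        return 0.0
    if x >= b:
        return 1.0
    if x <= c:
        return 2.0 * ((x - a) / (b - a)) ** 2
    return 1.0 - 2.0 * ((x - b) / (b - a)) ** 2

# Degree of membership of a metric value to the Unfavorable (U) set and,
# by complement, to the Favorable (F) set, assuming F = 0.0 and U = 0.3
# for an error-type metric (illustrative thresholds).
F, U = 0.0, 0.3
a, b = min(F, U), max(F, U)
x = 0.12                  # observed value of the validation metric
mu_U = s_curve(x, a, b)   # membership to U (0.32 here)
mu_F = 1.0 - mu_U         # membership to F (0.68 here)
print(mu_U, mu_F)
```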
A two-stage design of the fuzzy-based rule-inferring system is applied: first, inputs with similar characteristics are aggregated into modules; then, using the same procedure, the modules are aggregated into a second-level integrated index called the indicator. Both the modules and the indicator range from 0 to 1.
The control rules for estimating module values are based on logical relationships between inputs and outputs, expressed in linguistic terms by 'if-then' statements. For example, when two input variables (validation metrics) are aggregated, four rules are required, formalized as:
PREMISE → CONCLUSION
if x1 is F and x2 is F, then y1 is B1
if x1 is F and x2 is U, then y2 is B2
if x1 is U and x2 is F, then y3 is B3
if x1 is U and x2 is U, then y4 is B4
where xi is an input variable, yi is an output variable, and Bi is a conclusion (or expert weight). The truth value of each conjunction (… and …) is the minimum of the memberships of its quantified fuzzy sets, which are obtained from the complementary S-shaped distribution curves.
The output fuzzy sets for all the rules are then aggregated into a single fuzzy set. This set encompasses a range of output values and is de-fuzzified to resolve a single crisp output value (i.e. a value between 0 and 1). The approach uses the centroid method to obtain the representative non-fuzzy value for the output, as commonly adopted in Sugeno-type systems. The expert reasoning runs as follows: if all input variables are F, the value of the module is 0 (good response according to all metrics used); if all inputs are U, the value of the module is 1 (bad response according to all inputs used); all other combinations take intermediate values. The limits F and U may come from experience, may be extracted from the literature, or may be set by law. The weights can be chosen based on the analyst's own experience with each input.
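A compact Python sketch of this two-metric aggregation is given below. The thresholds, metric values, and expert weights are illustrative assumptions, and the defuzzification is written as the truth-weighted average used in zero-order Sugeno systems (to which the centroid reduces in that case); it is a sketch of the scheme described above, not Bellocchi's published code.

```python
def s_curve(x, a, b):
    """Zadeh's S-shaped transition function on [a, b] (as in the previous sketch)."""
    c = (a + b) / 2.0
    if x <= a:
        return 0.0
    if x >= b:
        return 1.0
    if x <= c:
        return 2.0 * ((x - a) / (b - a)) ** 2
    return 1.0 - 2.0 * ((x - b) / (b - a)) ** 2

def module(x1, x2, limits1, limits2, weights):
    """Aggregate two validation metrics into a module value in [0, 1].

    limits* are (F, U) pairs; weights are the four expert conclusions
    B1..B4 for the rules FF, FU, UF, UU.
    """
    # Membership of each metric to U and, by complement, to F.
    u1 = s_curve(x1, min(limits1), max(limits1))
    u2 = s_curve(x2, min(limits2), max(limits2))
    f1, f2 = 1.0 - u1, 1.0 - u2

    # Truth value of each rule premise: minimum of the conjuncts.
    truth = [min(f1, f2),   # if x1 is F and x2 is F
             min(f1, u2),   # if x1 is F and x2 is U
             min(u1, f2),   # if x1 is U and x2 is F
             min(u1, u2)]   # if x1 is U and x2 is U

    # Truth-weighted average of the expert weights: 0 when both
    # metrics are fully F, 1 when both are fully U.
    num = sum(t * b for t, b in zip(truth, weights))
    den = sum(truth)
    return num / den if den else 0.0

# Illustrative call: two error-type metrics with assumed F/U limits.
print(module(0.12, 0.45, limits1=(0.0, 0.3), limits2=(0.0, 1.0),
             weights=[0.0, 0.5, 0.5, 1.0]))
```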
A full bibliography is available on Bellocchi's website.
Sensitivity analysis is the study of how the uncertainty in the output of a mathematical model or system can be divided and allocated to different sources of uncertainty in its inputs. A related practice is uncertainty analysis, which has a greater focus on uncertainty quantification and propagation of uncertainty; ideally, uncertainty and sensitivity analysis should be run in tandem.
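As a minimal illustration (a one-at-a-time local sensitivity sketch, not a full variance-based method; the model and inputs are invented), each input can be perturbed in turn to see how much of the output change it accounts for:

```python
def model(x1, x2, x3):
    """Toy model standing in for a simulation (illustrative only)."""
    return 2.0 * x1 + x2 ** 2 + 0.1 * x3

base = {"x1": 1.0, "x2": 2.0, "x3": 3.0}
y0 = model(**base)

# Perturb each input by 10% while holding the others fixed,
# and record the resulting change in the output.
for name, value in base.items():
    bumped = dict(base, **{name: value * 1.1})
    dy = model(**bumped) - y0
    print(f"{name}: output change {dy:+.3f}")
```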
Cluster analysis or clustering is the task of grouping a set of objects in such a way that objects in the same group are more similar to each other than to those in other groups (clusters). It is a main task of exploratory data analysis, and a common technique for statistical data analysis, used in many fields, including pattern recognition, image analysis, information retrieval, bioinformatics, data compression, computer graphics and machine learning.
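For instance, a k-means clustering run with scikit-learn (one algorithm among many; the two-group data here are synthetic) looks like this:

```python
import numpy as np
from sklearn.cluster import KMeans

# Two synthetic groups of 2-D points.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 0.5, (20, 2)),
               rng.normal(3.0, 0.5, (20, 2))])

# Partition the points into two clusters by minimizing the
# within-cluster sum of squared distances.
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
print(km.labels_)           # cluster assignment of each point
print(km.cluster_centers_)  # coordinates of the two centroids
```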
Multiple-criteria decision-making (MCDM) or multiple-criteria decision analysis (MCDA) is a sub-discipline of operations research that explicitly evaluates multiple conflicting criteria in decision making. It is also known as multiple attribute utility theory, multiple attribute value theory, multiple attribute preference theory, and multi-objective decision analysis.
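A minimal weighted-sum sketch (one of the simplest MCDM methods; the criteria, weights, and scores are invented for illustration):

```python
# Alternatives scored on two conflicting criteria, already
# normalized to [0, 1], where higher is better.
alternatives = {"A": (0.9, 0.2), "B": (0.4, 0.8), "C": (0.7, 0.6)}
weights = (0.5, 0.5)  # relative importance of each criterion

# Each alternative's overall score is the weighted sum of its criteria.
scores = {name: sum(w * s for w, s in zip(weights, vals))
          for name, vals in alternatives.items()}
ranking = sorted(scores, key=scores.get, reverse=True)
print(scores, ranking)  # C (0.65) > B (0.60) > A (0.55)
```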
In mathematics, statistics, finance, and computer science, particularly in machine learning and inverse problems, regularization is a process that converts the answer to a problem into a simpler one. It is often used to obtain results for ill-posed problems or to prevent overfitting.
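A small sketch of one common regularization technique, ridge (L2) regression in closed form (synthetic data; lam is an assumed regularization strength):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + rng.normal(0.0, 0.1, 50)

lam = 1.0  # regularization strength; lam = 0 recovers ordinary least squares
# Ridge solution: penalizing ||w||^2 shrinks the coefficients,
# stabilizing ill-posed problems and reducing overfitting.
w = np.linalg.solve(X.T @ X + lam * np.eye(3), X.T @ y)
print(w)
```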
In soil science, pedotransfer functions (PTF) are predictive functions of certain soil properties using data from soil surveys.
The function point is a "unit of measurement" to express the amount of business functionality an information system provides to a user. Function points are used to compute a functional size measurement (FSM) of software. The cost of a single unit is calculated from past projects.
A mixed model, mixed-effects model or mixed error-component model is a statistical model containing both fixed effects and random effects. These models are useful in a wide variety of disciplines in the physical, biological and social sciences. They are particularly useful in settings where repeated measurements are made on the same statistical units, or where measurements are made on clusters of related statistical units. Mixed models are often preferred over traditional analysis of variance and regression models because of their flexibility in dealing with missing values and unevenly spaced repeated measurements. Mixed model analysis allows measurements to be explicitly modelled with a wider variety of correlation and variance-covariance structures.
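As an illustration, a mixed model with a random intercept per subject can be fitted with the statsmodels library (the toy data frame and variable names here are invented):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Toy longitudinal data: repeated measurements on the same subjects.
rng = np.random.default_rng(0)
subjects = np.repeat(np.arange(10), 5)
time = np.tile(np.arange(5), 10)
y = 1.0 + 0.5 * time + rng.normal(0, 1, 10)[subjects] + rng.normal(0, 0.2, 50)
df = pd.DataFrame({"subject": subjects, "time": time, "y": y})

# Fixed effect of time, random intercept for each subject.
result = smf.mixedlm("y ~ time", df, groups=df["subject"]).fit()
print(result.summary())
```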
Verification and validation are independent procedures that are used together for checking that a product, service, or system meets requirements and specifications and that it fulfills its intended purpose. These are critical components of a quality management system such as ISO 9000. The words "verification" and "validation" are sometimes preceded with "independent", indicating that the verification and validation is to be performed by a disinterested third party. "Independent verification and validation" can be abbreviated as "IV&V".
Group method of data handling (GMDH) is a family of inductive algorithms for computer-based mathematical modeling of multi-parametric datasets that features fully automatic structural and parametric optimization of models.
Suitability analysis is the process and procedures used to establish the suitability of a system – that is, the ability of a system to meet the needs of a stakeholder or other user.
The decision-making paradox is a phenomenon related to decision-making and the quest for determining reliable decision-making methods. It was first described by Triantaphyllou, and has been recognized in the related literature as a fundamental paradox in multi-criteria decision analysis (MCDA), multi-criteria decision making (MCDM) and decision analysis since then.
In decision-making, a rank reversal is a change in the rank ordering of the preferability of alternative possible decisions when, for example, the method of choosing changes or the set of other available alternatives changes. The issue of rank reversals lies at the heart of many debates in decision-making and multi-criteria decision-making, in particular.
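A small numerical sketch of the phenomenon (invented scores; weighted-sum scoring with column-max normalization, one setting in which adding an alternative can flip an existing ranking):

```python
def rank(alternatives, weights=(0.5, 0.5)):
    """Rank alternatives by a weighted sum of columns normalized by their max."""
    cols = list(zip(*alternatives.values()))
    maxima = [max(col) for col in cols]
    scores = {n: sum(w * v / m for w, v, m in zip(weights, vals, maxima))
              for n, vals in alternatives.items()}
    return sorted(scores, key=scores.get, reverse=True)

two = {"A": (10, 2), "B": (4, 8)}
print(rank(two))           # ['B', 'A']: B ranks above A
three = dict(two, C=(2, 20))
print(rank(three))         # ['C', 'A', 'B']: adding C reverses A and B
```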
A probability box is a characterization of uncertain numbers consisting of both aleatoric and epistemic uncertainties that is often used in risk analysis or quantitative uncertainty modeling where numerical calculations must be performed. Probability bounds analysis is used to make arithmetic and logical calculations with p-boxes.
Probability bounds analysis (PBA) is a collection of methods of uncertainty propagation for making qualitative and quantitative calculations in the face of uncertainties of various kinds. It is used to project partial information about random variables and other quantities through mathematical expressions. For instance, it computes sure bounds on the distribution of a sum, product, or more complex function, given only sure bounds on the distributions of the inputs. Such bounds are called probability boxes, and constrain cumulative probability distributions.
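In the degenerate case where each input is known only as an interval (a p-box carrying no distributional information), sure bounds propagate by ordinary interval arithmetic, sketched below; full p-box arithmetic bounds entire cumulative distribution functions instead:

```python
def add(x, y):
    """Sure bounds on x + y, given only sure bounds on x and y."""
    return (x[0] + y[0], x[1] + y[1])

def mul(x, y):
    """Sure bounds on x * y (general interval product)."""
    products = [a * b for a in x for b in y]
    return (min(products), max(products))

x = (1.0, 2.0)    # all that is known: 1 <= x <= 2
y = (0.5, 3.0)
print(add(x, y))  # (1.5, 5.0)
print(mul(x, y))  # (0.5, 6.0)
```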
In probability theory, the matrix analytic method is a technique to compute the stationary probability distribution of a Markov chain which has a repeating structure and a state space which grows unboundedly in no more than one dimension. Such models are often described as M/G/1 type Markov chains because they can describe transitions in an M/G/1 queue. The method is a more complicated version of the matrix geometric method and is the classical solution method for M/G/1 chains.
Modelling frameworks are used in modelling and simulation and can consist of a software infrastructure to develop and run mathematical models. They have provided a substantial step forward in the area of biophysical modelling with respect to monolithic implementations. The separation of algorithms from data, the reusability of I/O procedures and integration services, and the isolation of modelling solutions in discrete units have brought a solid advantage in the development of simulation systems. Modelling frameworks for agriculture have evolved over time, with different approaches and targets.
ANSI/ASHRAE Standard 55: Thermal Environmental Conditions for Human Occupancy is an American National Standard published by ASHRAE that establishes the ranges of indoor environmental conditions to achieve acceptable thermal comfort for occupants of buildings. It was first published in 1966, and since 2004 has been updated every three to six years. The most recent version of the standard was published in 2023.
SmartPLS is a software package with a graphical user interface for variance-based structural equation modeling (SEM) using the partial least squares (PLS) path modeling method. Users can estimate models with their data by using the basic PLS-SEM, weighted PLS-SEM (WPLS), consistent PLS-SEM (PLSc-SEM), and sumscores regression algorithms. The software computes standard results assessment criteria and supports additional statistical analyses. Since SmartPLS is programmed in Java, it can be executed on different computer operating systems such as Windows and Mac.
ACTRAN is a finite element-based computer-aided engineering software package for modeling the acoustic behavior of mechanical systems and parts. ACTRAN is developed by Free Field Technologies, a Belgian software company founded in 1998 by Jean-Pierre Coyette and Jean-Louis Migeot. Free Field Technologies has been a wholly owned subsidiary of the MSC Software Corporation since 2011, and both have been part of Hexagon AB since 2017.
WindStation is a wind energy software package which uses computational fluid dynamics (CFD) to conduct wind resource assessments in complex terrain. The physical background and its numerical implementation are described in the published literature and in the official manual of the software.