




◆ 
While studying the relation between two variables, we can usually obtain a series of data pairs (x_1, y_1), (x_2, y_2), ..., (x_m, y_m). Plotting the data in rectangular coordinates (Chart 1), if the points lie near a straight line, we can write the equation of the line as (Formula 1-1):

y = a_0 + a_1 x    (Formula 1-1)

where a_0 and a_1 are arbitrary real numbers.


◆ 
To determine the equation of the line, we must find a_0 and a_1. The optimization criterion is to minimize the sum of the squares of the differences between the observed values y_i and the values of y computed from (Formula 1-1). Let:

Φ = ∑(y_i − y)^2    (Formula 1-2)

Substituting (Formula 1-1) into (Formula 1-2) gives:

Φ = ∑(y_i − a_0 − a_1 x_i)^2    (Formula 1-3)


◆ 
Φ = ∑(y_i − a_0 − a_1 x_i)^2 is smallest when the partial derivatives of Φ with respect to a_0 and a_1 are zero:

∂Φ/∂a_0 = −2 ∑(y_i − a_0 − a_1 x_i) = 0    (Formula 1-4)
∂Φ/∂a_1 = −2 ∑(y_i − a_0 − a_1 x_i) x_i = 0    (Formula 1-5)

Rearranging gives the normal equations:

m a_0 + (∑x_i) a_1 = ∑y_i    (Formula 1-6)
(∑x_i) a_0 + (∑x_i^2) a_1 = ∑x_i y_i    (Formula 1-7)


◆ 
That is, we obtain two equations in the two unknowns a_0 and a_1; solving them gives:

a_1 = (m ∑x_i y_i − ∑x_i ∑y_i) / (m ∑x_i^2 − (∑x_i)^2)    (Formula 1-8)
a_0 = (∑y_i − a_1 ∑x_i) / m    (Formula 1-9)



◆ 
Substituting a_0 and a_1 back into (Formula 1-1), (Formula 1-1) becomes the linear regression equation, which is the mathematical model. 
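The normal equations above can be sketched in a few lines of Python. This is a minimal illustration, not the paper's code; the data values are invented for the example.

```python
# Ordinary least squares for y = a0 + a1*x via the normal equations:
#   m*a0 + (sum x)*a1 = sum y
#   (sum x)*a0 + (sum x^2)*a1 = sum x*y

def linear_fit(xs, ys):
    m = len(xs)
    sx = sum(xs)
    sy = sum(ys)
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, ys))
    # Closed-form solution of the two normal equations
    a1 = (m * sxy - sx * sy) / (m * sxx - sx * sx)
    a0 = (sy - a1 * sx) / m
    return a0, a1

xs = [1, 2, 3, 4, 5]
ys = [2.1, 3.9, 6.2, 7.8, 10.1]   # roughly y = 2x, with small noise
a0, a1 = linear_fit(xs, ys)
```

With a_0 and a_1 in hand, y = a_0 + a_1 x is the regression line for the sample data.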


◆ 
During regression, the fitted formula cannot pass through every data point (x_1, y_1), (x_2, y_2), ..., (x_m, y_m). To judge whether the fitted formula is adequate, we can use the correlation coefficient R, the statistical variable F, and the residual standard deviation S: the fit is better when R tends to 1, the absolute value of F is larger, and S tends to 0. (Formula 1-10) In (Formula 1-10), m is the sample size, i.e. the number of experiments, and x_i and y_i are the measured values of x and y.
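The R and S checks can be sketched as follows. Since the body of Formula 1-10 is not reproduced in the text, the standard definitions of the correlation coefficient and residual standard deviation are assumed here; the coefficients 0.05 and 1.99 are illustrative results of a linear fit to the sample data, not values from the paper.

```python
import math

# Goodness-of-fit checks: correlation coefficient R and residual
# standard deviation S (standard definitions assumed).

def fit_quality(xs, ys, a0, a1):
    m = len(xs)
    preds = [a0 + a1 * x for x in xs]
    ybar = sum(ys) / m
    ss_res = sum((y - p) ** 2 for y, p in zip(ys, preds))
    ss_tot = sum((y - ybar) ** 2 for y in ys)
    r = math.sqrt(1 - ss_res / ss_tot)   # R -> 1 means a good fit
    s = math.sqrt(ss_res / m)            # S -> 0 means small residuals
    return r, s

xs = [1, 2, 3, 4, 5]
ys = [2.1, 3.9, 6.2, 7.8, 10.1]
r, s = fit_quality(xs, ys, 0.05, 1.99)   # illustrative fitted coefficients
```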





◆ 
When studying the relation between two variables (x, y), we can obtain a series of paired data (x_1, y_1), (x_2, y_2), ..., (x_m, y_m). Plotting the data in the x-y rectangular coordinate system (Chart 2), the points are found to lie near a curve. Suppose the one-variable nonlinear equation of the curve is (Formula 2-1):

y = a_0 + a_1 x^k    (Formula 2-1)

where a_0, a_1 and k are arbitrary real numbers.

◆ 
To determine the curve equation, the values of a_0, a_1 and k must be found. As in Least Square Method data regression, the criterion is based on the sum of the squared deviations between the measured values y_i and the computed values. Let:

Φ = ∑(y_i − y)^2    (Formula 2-2)

Substituting (Formula 2-1) into (Formula 2-2) gives:

Φ = ∑(y_i − a_0 − a_1 x_i^k)^2    (Formula 2-3)

◆ 
Φ = ∑(y_i − a_0 − a_1 x_i^k)^2 is smallest when the partial derivatives of Φ with respect to a_0, a_1 and k are zero:

∂Φ/∂a_0 = −2 ∑(y_i − a_0 − a_1 x_i^k) = 0    (Formula 2-4)
∂Φ/∂a_1 = −2 ∑(y_i − a_0 − a_1 x_i^k) x_i^k = 0    (Formula 2-5)
∂Φ/∂k = −2 ∑(y_i − a_0 − a_1 x_i^k) a_1 x_i^k ln x_i = 0    (Formula 2-6)

◆ 
This gives three equations in the three unknowns a_0, a_1 and k; solving them yields the mathematical model. 
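One practical way to solve for a_0, a_1 and k is to note that, for any fixed k, a_0 and a_1 follow from ordinary least squares on the transformed variable x^k. The grid search over k below is a sketch that stands in for solving Formulas 2-4 to 2-6 directly; it is an assumption of this example, not the paper's exact algorithm, and the data are invented.

```python
# Fit y = a0 + a1*x**k by trying candidate powers k: for each k, solve
# the linear normal equations in the transformed variable t = x**k, and
# keep the k with the smallest squared-error criterion Phi.

def power_fit(xs, ys, k_grid):
    best = None
    m = len(xs)
    for k in k_grid:
        ts = [x ** k for x in xs]            # transformed variable t = x**k
        st, sy = sum(ts), sum(ys)
        stt = sum(t * t for t in ts)
        sty = sum(t * y for t, y in zip(ts, ys))
        a1 = (m * sty - st * sy) / (m * stt - st * st)
        a0 = (sy - a1 * st) / m
        phi = sum((y - a0 - a1 * t) ** 2 for t, y in zip(ts, ys))
        if best is None or phi < best[0]:
            best = (phi, a0, a1, k)
    return best[1], best[2], best[3]

xs = [1, 2, 3, 4, 5]
ys = [x ** 2 + 1 for x in xs]                # exact model: a0=1, a1=1, k=2
a0, a1, k = power_fit(xs, ys, [i / 10 for i in range(5, 31)])
```

Because the sample data follow y = 1 + x^2 exactly, the search recovers k = 2 with zero residual.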

◆ 
As before, we can judge the adequacy of the mathematical model with the correlation coefficient R, the statistical variable F, and the residual standard deviation S: the fit is better when R tends to 1, the absolute value of F is larger, and S tends to 0. Even when these checks pass, the model's computed errors can sometimes still be large, so to improve the mathematical model further, the maximum error, the mean error, and the mean relative error of the computed model are also calculated to validate it. 



◆ 
Table: comparison of the Least Cubic Method and the Least Square Method 

The comparison shows that the optimization criterion ∑(y_i − y)^2 is the same for Least Square Method data regression and for the Least Cubic Method. The difference is that the Least Cubic Method computes the power k, while Least Square Method data regression does not compute the value of k and takes it as 1. 

◆ 
By computing the power, the Least Cubic Method lets the model function bend at different rates, so it can fit curves of different curvature. It avoids the complicated work of building a mechanism model and of applying a linearizing transformation, and makes the regression model fit the data better. 

◆ 
For the regression of nonlinear data, there is no need to transform the variables, as Least Square Method data regression does, before building a model of the objective function; the mathematical model is obtained in a single regression. Since the regression considers not only the contribution each variable makes to the objective function but also the interactions among the variables, the model is more accurate. 

◆ 
In Least Square Method data regression there is only x. In the Least Cubic Method, the variable data can involve many terms x^k1, x^k2, ..., x^kn, i.e. (Formula 3-1). This property can make the fit to the data more accurate:

y = a_0 + a_1 x^k1 + a_2 x^k2 + ... + a_n x^kn    (Formula 3-1) 
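Once the powers k_1, ..., k_n are fixed, Formula 3-1 is linear in a_0, ..., a_n, so the coefficients come from solving the normal equations of a multi-column least-squares problem. The sketch below assumes fixed powers and invented data, and uses plain Gaussian elimination with no pivoting safeguards.

```python
# Fit y = a0 + a1*x**k1 + ... + an*x**kn for FIXED powers ks, by building
# the normal equations A a = b and solving them by Gaussian elimination.

def multi_power_fit(xs, ys, ks):
    cols = [[1.0] * len(xs)] + [[x ** k for x in xs] for k in ks]
    n = len(cols)
    # Normal equations: A[i][j] = sum(col_i * col_j), b[i] = sum(col_i * y)
    A = [[sum(ci * cj for ci, cj in zip(cols[i], cols[j])) for j in range(n)]
         for i in range(n)]
    b = [sum(c * y for c, y in zip(cols[i], ys)) for i in range(n)]
    for i in range(n):                        # forward elimination
        for r in range(i + 1, n):
            f = A[r][i] / A[i][i]
            A[r] = [arj - f * aij for arj, aij in zip(A[r], A[i])]
            b[r] -= f * b[i]
    a = [0.0] * n
    for i in reversed(range(n)):              # back substitution
        a[i] = (b[i] - sum(A[i][j] * a[j] for j in range(i + 1, n))) / A[i][i]
    return a                                  # [a0, a1, ..., an]

xs = [1.0, 2.0, 3.0, 4.0, 5.0]
ys = [2 + 3 * x + 0.5 * x ** 2 for x in xs]   # a0=2, a1=3, a2=0.5
a = multi_power_fit(xs, ys, [1, 2])
```

When the powers themselves are unknown, the search over k from the previous sketch would have to be combined with this linear solve.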

Model selection 
◆ 
Mechanism Study Method. This method studies the internal relations of the process. After making assumptions about the process, a mathematical equation is set up relating data of two or more dimensions. By applying mathematical transformations to the equation, the related variables and the objective function are found, and the coefficients of the mechanism model are computed by data regression. The method is suitable when the data are few or of low accuracy, and the mechanism model is needed to compensate for these deficiencies. 

◆ 
Data Research Method. This method works on two-dimensional data, taking the two dimensions as the objective function and the variable. A change in the variable x causes a change in y, and the change can be divided into six situations (Chart 4-1 to Chart 4-6).
Firstly, linear increase: as x increases, y increases at a constant rate.
Secondly, linear decrease: as x increases, y decreases at a constant rate.
Thirdly, nonlinear increase: as x increases, y increases at an accelerating rate.
Fourthly, nonlinear increase: as x increases, y increases at a decelerating rate.
Fifthly, nonlinear decrease: as x increases, y decreases at an accelerating rate.
Sixthly, nonlinear decrease: as x increases, y decreases at a decelerating rate. 



◆ 
Suppose the equation for all six situations is:

y = a_0 + a_1 x^k    (Formula 4-1)

In the first situation: a_0 > 0, a_1 > 0, k = 1.
In the second situation: a_0 > 0, a_1 < 0, k = 1.
In the third situation: a_0 > 0, a_1 > 0, k > 1 or k < 0.
In the fourth situation: a_0 > 0, a_1 > 0, 0 < k < 1.
In the fifth situation: a_0 > 0, a_1 < 0, 0 < k < 1.
In the sixth situation: a_0 > 0, a_1 < 0, k > 1 or k < 0.

Summarizing the above, if we choose x^k, we can select the value of k according to the relations among a_0, a_1 and k given in Situations 1 to 6. In Situations 3 and 6 the curve is concave upward, similar to an exponential curve, so we can select the exponential form e^x. In Situations 4 and 5 the curve is convex upward, similar to a logarithm, so we can select the logarithmic form ln x (logarithm with base e). 
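The six-way classification above can be sketched from the data itself: the sign of the first differences distinguishes increase from decrease, and the sign of the second differences distinguishes linear from accelerating or decelerating change. This classifier is an illustration of the rule, assuming clean monotone data; it is not from the paper.

```python
# Classify a monotone data series into one of the six situations by the
# signs of its first differences (trend) and second differences (pace).

def classify(ys):
    d1 = [b - a for a, b in zip(ys, ys[1:])]     # first differences
    d2 = [b - a for a, b in zip(d1, d1[1:])]     # second differences
    rising = all(d > 0 for d in d1)
    if all(abs(d) < 1e-9 for d in d2):           # constant rate of change
        return "linear increase" if rising else "linear decrease"
    # Accelerating means the magnitude of the change keeps growing.
    accel = all(d > 0 for d in d2) if rising else all(d < 0 for d in d2)
    trend = "increase" if rising else "decrease"
    pace = "accelerating" if accel else "decelerating"
    return f"nonlinear {pace} {trend}"
```

For example, y = x^2 sampled at x = 1..5 is classified as a nonlinear accelerating increase (Situation 3).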

◆ 
Problems to note when selecting parameters:
1. When a data value is 0, it cannot be used as a divisor or have its logarithm taken; we can add a constant to that dimension to make every value greater than zero.
2. When there is a negative value in the data, it cannot be used in the regression computation; multiply the data of that dimension by a negative number to make them greater than zero.
3. The power of a variable should not be too large or too small, otherwise the regression computation may break off, and sometimes the model will amplify the computing error. 