LTU(3)		      User Contributed Perl Documentation		LTU(3)

NAME
       Statistics::LTU - An implementation of Linear Threshold Units

SYNOPSIS
	   use Statistics::LTU;

	   my $ltu = new Statistics::LTU::ACR(3, 1);   # 3 attributes, scaled

	   $ltu->train([1,3,2],  $LTU_PLUS);
	   $ltu->train([-1,3,0], $LTU_MINUS);
	   ...
	   print "LTU looks like this:\n";
	   $ltu->print;

	   print "[1,5,2] is in class ";
	   if ($ltu->test([1,5,2]) > $LTU_THRESHOLD) { print "PLUS" }
	   else					     { print "MINUS" }

	   $ltu->save("ACR.saved") or die "Save failed!";
	   my $ltu2 = restore Statistics::LTU("ACR.saved");

EXPORTS
       For readability, LTU.pm exports three scalar constants: $LTU_PLUS
       (+1), $LTU_MINUS (-1) and $LTU_THRESHOLD (0).

DESCRIPTION
       Statistics::LTU defines methods for creating, destroying, training
       and testing Linear Threshold Units.  A linear threshold unit is a
       1-layer neural network, also called a perceptron.  LTUs are used to
       learn classifications from examples.

       An LTU learns to distinguish between two classes based on the data
       given to it.  After training on a number of examples, the LTU can
       then be used to classify new (unseen) examples.  Technically, LTUs
       learn to distinguish two classes by fitting a hyperplane between
       examples; if the examples have n features, the hyperplane lies in
       n-dimensional feature space.  In general, the LTU's weights will
       converge to define the separating hyperplane.
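
       As a concrete sketch (not part of the module's API), the decision an
       LTU makes reduces to the sign of a weighted sum.  The weights below
       are made up; a trained LTU would have learned them from data:

	   # Hypothetical weights for a 2-feature LTU; illustration only.
	   my @w    = (0.5, -1.2);	# one weight per feature
	   my $bias = 0.3;		# intercept term
	   my @x    = (1, 5);		# instance to classify

	   my $raw = $bias;
	   $raw += $w[$_] * $x[$_] for 0 .. $#x;

	   print $raw > 0 ? "PLUS\n" : "MINUS\n";   # threshold at zero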

       The LTU.pm file defines an uninstantiable base class, LTU, and four
       instantiable classes built on top of it.  The four classes differ in
       the training rule used:

       ACR  - Absolute Correction Rule
       TACR - Thermal Absolute Correction Rule (thermal annealing)
       LMS  - Least Mean Squares rule
       RLS  - Recursive Least Squares rule

       Each of these training rules behaves somewhat differently.  Exact
       details of how these work are beyond the scope of this document; see
       the additional documentation file (ltu.doc) for discussion.

SCALARS
       $LTU_PLUS and $LTU_MINUS (+1 and -1, respectively) may be passed to
       the train method.  $LTU_THRESHOLD (set to zero) may be used to
       compare values returned from the test method.

METHODS
       Each LTU has the following methods:

       new TYPE(n_features, scaling)
	    Creates an LTU of the given "TYPE".  "TYPE" must be one of:

	    Statistics::LTU::ACR,
	    Statistics::LTU::TACR,
	    Statistics::LTU::LMS,
	    Statistics::LTU::RLS.

	    "n_features" sets the number of attributes in the examples.  If
	    "scaling" is 1, the LTU will automatically scale the input
	    features to the range (-1, +1).  For example:

		    $ACR_ltu = new Statistics::LTU::ACR(5, 1);

	    creates an LTU that will train using the absolute correction
	    rule.  It will have 5 features and scale them automatically.

       copy Copies the LTU and returns the copy.

       destroy
	    Destroys the LTU (undefines its substructures).  This method is
	    kept for compatibility; it is probably sufficient simply to call
	    undef($ltu).

       print
	    Prints a human-readable description of the LTU, including the
	    weights.

       save(filename)
	    Saves the LTU to the file filename.  All the weights and
	    necessary permanent data are saved.  Returns 1 if the LTU was
	    saved successfully, else 0.

       restore LTU(filename)
	    Static method.  Creates and returns a new LTU from filename.
	    The new LTU will be of the same type as the one that was saved.
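
	    For example, assuming $ltu holds a trained LTU (the filename
	    here is arbitrary):

	       $ltu->save("acr.ltu") or die "Save failed!";
	       my $ltu2 = restore Statistics::LTU("acr.ltu");
	       print $ltu2->test([1,5,2]), "\n";   # same result as $ltu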

       test(instance)
	    Tests the LTU on instance, the instance vector, which must be a
	    reference to an array.  Returns the raw (non-thresholded)
	    result.  A typical use of this is:

	       if ($ltu->test($instance) >= $LTU_THRESHOLD) {
		  # instance is in class 1 (PLUS)
	       } else {
		  # instance is in class 2 (MINUS)
	       }

       correctly_classifies(instance, realclass)
	    Tests the LTU against an instance vector instance, which must
	    be a reference to an array.  realclass must be a number.
	    Returns 1 if the LTU classifies instance in the same class as
	    realclass.  Technically: returns 1 iff instance is on the
	    realclass side of the LTU's hyperplane.
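
	    For instance, a simple accuracy check over a held-out set might
	    look like this (the @test_set data here is hypothetical):

	       my @test_set = ([[1,3,2], $LTU_PLUS], [[-1,3,0], $LTU_MINUS]);
	       my $correct  = 0;
	       foreach my $pair (@test_set) {
		   my ($instance, $class) = @$pair;
		   $correct++ if $ltu->correctly_classifies($instance, $class);
	       }
	       printf "accuracy: %.2f\n", $correct / @test_set;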

       weights
	    Returns a reference to a copy of the LTU's weights.
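
	    Because the reference points at a copy, the caller may inspect
	    or modify it without disturbing the LTU:

	       my $w = $ltu->weights;
	       print "current weights: @$w\n";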

       set_origin_restriction(orig)
	    Sets the LTU's origin restriction to orig, which should be 1 or
	    0.  If an LTU is origin-restricted, its hyperplane must pass
	    through the origin (i.e., its intercept is zero).  This is
	    usually used for preference predicates, whose classifications
	    must be symmetrical.

       is_cycling(n)
	    Returns 1 if the LTU's weights seem to be cycling.  This is a
	    heuristic test, based on whether the LTU's weights have been
	    pushed out in the past n training instances.  See the comments
	    in the code.

       version
	    Returns the version of the LTU implementation.

       In addition to the methods above, each of the four classes of LTU
       defines a train method.  The train method trains the LTU to place an
       instance in a particular class.  For each train method, instance
       must be a reference to an array of numbers, and value must be a
       number.  For convenience, two constants are defined: $LTU_PLUS and
       $LTU_MINUS, set to +1 and -1 respectively.  These can be given as
       arguments to the train method.  A typical train call looks like:

	$ltu->train([1,3,-5], $LTU_PLUS);

       which trains the LTU that the instance vector (1,3,-5) should be in
       the PLUS class.

       o    For ACR:	train(instance, value)

	    Returns 1 iff the LTU already classified the instance correctly,
	    else 0.

       o    For RLS:	train(instance, value)

	    Returns undef.

       o    For LMS:	train(instance, value, rho)

	    Returns 1 if the LTU already classified the instance correctly,
	    else 0.  rho determines how much the weights are adjusted on
	    each training instance.  It must be a positive number.
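
	    For example, assuming $lms_ltu is a Statistics::LTU::LMS
	    instance (the rho value of 0.1 is an arbitrary illustration,
	    not a recommended setting):

	       $lms_ltu->train([1,3,-5], $LTU_PLUS, 0.1);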

       o    For TACR:	train(instance, value, temperature, rate)

	    Uses the thermal perceptron (absolute correction) rule to train
	    the linear threshold unit on instance.  instance is a reference
	    to an array of numbers; each number is one attribute.  value
	    should be either $LTU_PLUS (for positive instances) or
	    $LTU_MINUS (for negative instances).  The temperature and rate
	    must be floating point numbers.

	    This method returns 1 if the linear threshold unit already
	    classified the instance correctly, otherwise it returns 0.  The
	    TACR rule only trains on instances that it does not already
	    classify correctly.
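
	    As a sketch of a TACR training loop (the temperature and rate
	    values and the stopping rule below are illustrative assumptions;
	    see ltu.doc for how the thermal schedule should actually be
	    chosen):

	       my $ltu = new Statistics::LTU::TACR(3, 1);
	       my ($temp, $rate) = (2.0, 0.9);    # assumed schedule values
	       foreach my $pass (1 .. 20) {
		   my $errors = 0;
		   # @training_set holds [instance, class] pairs
		   foreach my $ex (@training_set) {
		       my ($instance, $class) = @$ex;
		       $errors++
			   unless $ltu->train($instance, $class, $temp, $rate);
		   }
		   last if $errors == 0;   # every instance already correct
	       }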

AUTHOR
       fawcett@nynexst.com (Tom Fawcett)

       LTU.pm is based on a C implementation by James Callan at the
       University of Massachusetts.  His version has been in use for a long
       time, is stable, and seems to be bug-free.  This Perl module was
       created by Tom Fawcett, and any bugs you find were probably
       introduced in translation.  Send bugs, comments and suggestions to
       fawcett@nynexst.com.

BUGS
       None known.  This Perl module has been moderately exercised but I
       don't guarantee anything.

perl v5.24.1			  1997-02-27				LTU(3)
