.. _Usage:

=====
Usage
=====

AutoFeedback offers three high-level functions for checking student code:

.. code-block:: python

    from AutoFeedback import check_vars, check_func, check_plot

Having imported these, use whichever of the three is appropriate for the exercise, as described in the sections below.

Checking Variables
==================

:py:meth:`AutoFeedback.check_vars`

.. code-block:: python

    assert check_vars('x', 3)

will check whether the student has defined a variable `x` in main.py to be equal to 3, and print feedback to the screen.

Checking Functions
==================

:py:meth:`AutoFeedback.check_func`

There are two ways to use `check_func`. The first is to define the functions exactly as you wish the students to do. You should also set the `.inputs` attribute of each function as a list of tuples containing some sample inputs that can be used for testing the function:

.. code-block:: python

    def addup(x, y):
        return x+y

    addup.inputs = [(3, 4), (5, 6)]

    def addAndSquare(x, y):
        z = addup(x, y)
        return z * z

    addAndSquare.inputs = [(3, 4)]

    assert check_func(addup)
    assert check_func(addAndSquare, calls=['addup'])

Although this requires more lines of code than the legacy usage (see below), it removes the need to pre-calculate the expected outputs, ensuring that the results are robust against changes to libraries, environments etc.

Legacy usage
------------

The second method is to pass the function name as a string. This is maintained for backwards compatibility. In this case the inputs and expected outputs must be passed:

.. code-block:: python

    assert check_func('addup', inputs=[(3, 4), (5, 6)], expected=[7, 11])

will check whether the student has defined a function named `addup` in main.py which takes two input arguments, adds them and returns the result. To check whether functions call other named functions, use the `calls` optional argument:

.. code-block:: python

    assert check_func('addAndSquare', inputs=[(3, 4)], expected=[49],
                      calls=['addup'])

which checks whether the function `addAndSquare` calls the function `addup` during its execution.

Checking Plots
==============

:py:meth:`AutoFeedback.check_plot`

To check a student's plot object you must first define the 'lines' you expect to see in the plot. The lines are of type :py:meth:`AutoFeedback.plotclass.line` and are defined and tested as follows:

.. code-block:: python

    from AutoFeedback.plotclass import line

    line1 = line([0, 1, 2, 3], [0, 1, 4, 9],
                 linestyle=['-', 'solid'],
                 colour=['r', 'red', (1.0, 0.0, 0.0, 1)],
                 label='squares')
    line2 = line([0, 1, 2, 3], [0, 1, 8, 27],
                 linestyle=['--', 'dashed'],
                 colour=['b', 'blue', (0.0, 0.0, 1.0, 1)],
                 label='cubes')

    assert check_plot([line1, line2],
                      expaxes=[0, 3, 0, 27],
                      explabels=['x', 'y', 'Plot of squares and cubes'],
                      explegend=True)

which checks to ensure that both the squared and cubed values are plotted with the correct colour and linestyle, that the legend is shown with the correct labels, and that the axis limits, labels and figure title are set correctly.
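For reference, here is a minimal sketch of student code that would satisfy the checks above. It is illustrative only and assumes the student plots with matplotlib in ``main.py``; it is not part of AutoFeedback itself.

.. code-block:: python

    # Illustrative student-side main.py (assumed to use matplotlib);
    # each call below corresponds to one of the properties tested above.
    import matplotlib.pyplot as plt

    x = [0, 1, 2, 3]
    plt.plot(x, [0, 1, 4, 9], 'r-', label='squares')   # red, solid line
    plt.plot(x, [0, 1, 8, 27], 'b--', label='cubes')   # blue, dashed line
    plt.axis([0, 3, 0, 27])                            # expaxes
    plt.xlabel('x')                                    # explabels[0]
    plt.ylabel('y')                                    # explabels[1]
    plt.title('Plot of squares and cubes')             # explabels[2]
    plt.legend()                                       # explegend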
Checking random variables
=========================

AutoFeedback can be used to provide feedback on student code that generates random variables. To get AutoFeedback to test student code for generating random variables you must provide information on the distribution that the student is supposed to sample from. AutoFeedback then uses this information to perform various two-tailed hypothesis tests on the numbers that the student's code generates. The null hypothesis for these tests is that the student's code is generating these random variables correctly. The alternative hypothesis is that the student code is not correctly sampling from the distribution.

When using these sorts of tests **there is a finite probability that the student is told that their code is incorrect even when it is correct**. This fact is clearly explained to students in the feedback they receive, so these tests can be used if you are asking students to complete a formative assessment task. If you are using AutoFeedback for summative assessment it is probably best not to rely on the marks it gives if your tasks involve random variables.

Two types of hypothesis test are performed. In the first the test statistic is:

.. math::

   T = \frac{ \overline{X} - \mu }{ \sqrt{\sigma^2 / n} }

where :math:`\overline{X}` is a sample mean computed from :math:`n` identical and independent random variables. :math:`\mu` and :math:`\sigma^2`, meanwhile, are the expectation and variance of the sampled random variables. Under the null hypothesis (that the student's code is correct) the test statistic above should be a sample from a standard normal distribution.

The second type of hypothesis test uses the following test statistic:

.. math::

   U = \frac{(n-1)S^2}{\sigma^2}

where :math:`S^2` is a sample variance computed from :math:`n` identical and independent random variables. :math:`\sigma^2` is then the variance of the sampled random variables. Under the null hypothesis the test statistic above should be a sample from a :math:`\chi^2` distribution with :math:`(n-1)` degrees of freedom.
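To make these two tests concrete, the sketch below shows how the statistics can be computed and converted into two-tailed p-values for a sample that is supposed to come from a standard normal distribution. This is purely illustrative and not part of AutoFeedback's API; the sample size ``n`` and the use of ``scipy.stats`` here are assumptions.

.. code-block:: python

    import numpy as np
    from scipy.stats import norm, chi2

    # Illustrative only: n draws that should come from N(mu, sigma^2)
    n, mu, sigma2 = 100, 0.0, 1.0
    X = np.random.normal(mu, np.sqrt(sigma2), n)

    T = (X.mean() - mu) / np.sqrt(sigma2 / n)   # ~ N(0, 1) under the null hypothesis
    U = (n - 1) * X.var(ddof=1) / sigma2        # ~ chi^2(n-1) under the null hypothesis

    # Two-tailed p-values: small values indicate the sample is
    # inconsistent with the stated distribution.
    p_T = 2 * min(norm.cdf(T), 1 - norm.cdf(T))
    p_U = 2 * min(chi2.cdf(U, n - 1), 1 - chi2.cdf(U, n - 1))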
The examples that follow illustrate how AutoFeedback can be used for a range of tasks that you might ask students to perform as part of an elementary course in statistics.

Single random variable
----------------------

Suppose you want students to write code that sets a variable called ``var`` equal to a sample from a standard normal random variable. The correct student code would look like this:

.. code:: python

   import numpy as np

   var = np.random.normal(0, 1)

You can test this code using ``check_vars`` and ``randomclass`` as follows:

.. code:: python

   from AutoFeedback.varchecks import check_vars
   from AutoFeedback.randomclass import randomvar

   # Create a random variable object with expectation 0 and variance 1
   # to test the student variable against
   r = randomvar(0, variance=1)

   # Use check_vars to test the student variable var against the
   # random variable you created
   assert(check_vars("var", r))

``check_vars`` here calculates the test statistic :math:`T` that was defined earlier using the value the student has given to the variable named ``var`` in place of :math:`\overline{X}`. As the student has calculated only a single random variable, :math:`n` is set equal to 1.

By a similar logic, if the student is supposed to set the variable ``U`` equal to a uniform continuous random variable that lies between 0 and 1 using a program like this:

.. code:: python

   import numpy as np

   U = np.random.uniform(0, 1)

you can test this code using ``check_vars`` and ``randomclass`` as follows:

.. code:: python

   from AutoFeedback.varchecks import check_vars
   from AutoFeedback.randomclass import randomvar

   # Create a random variable object with expectation 0.5 and variance 1/12
   # to test the student variable against
   r = randomvar(0.5, variance=1/12, vmin=0, vmax=1)

   # Use check_vars to test the student variable U against the
   # random variable you created
   assert(check_vars("U", r))

Now, because ``vmin`` and ``vmax`` were set when the random variable was set up, AutoFeedback checks that ``U`` is between 0 and 1 before performing the hypothesis test that was performed on the normal random variable.

Functions for generating random variables
-----------------------------------------

Let's suppose you have set students the task of writing a function for generating a Bernoulli random variable. A correct solution to this problem will look something like this:

.. code:: python

   import numpy as np

   def bernoulli(p):
       if np.random.uniform(0, 1) < p:
           return 1
       return 0
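By analogy with the ``check_vars`` examples above, a function like this can then be tested against a ``randomvar`` object. The sketch below is an assumption about how the pieces fit together rather than confirmed API behaviour: in particular, passing a ``randomvar`` as the ``expected`` output of ``check_func``, and the choice of ``p = 0.5``, are illustrative guesses.

.. code:: python

   from AutoFeedback import check_func
   from AutoFeedback.randomclass import randomvar

   # Assumed test: a Bernoulli(p) variable with p = 0.5 has expectation p
   # and variance p * (1 - p) = 0.25, and takes values between 0 and 1.
   r = randomvar(0.5, variance=0.25, vmin=0, vmax=1)

   # Assumption: check_func accepts a randomvar as the expected output,
   # in the same way that check_vars does for variables.
   assert check_func('bernoulli', inputs=[(0.5,)], expected=[r])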