When pet owners and trainers think about the Treat&Train®, what generally comes to mind is using it to train their pet to behave better, or for enrichment and teaching fun tricks. But there's another, completely different and equally cool place the Treat&Train shows up: animal behavior research, where it also serves as a teaching tool for animal behavior students!
For instance, one professor, Dr. Christy Hoffman at Canisius College in Buffalo, New York, uses her dog, Santiago, and the Treat&Train in her class called Research Methods in Animal Behavior.
Canisius College is one of a handful of schools in the US that offers an undergraduate major in animal behavior. Consequently, it’s one of the few where students get to actively learn how to scientifically approach interesting questions about behavior and perception in animals.
Animal Behavior Students Learn About Research and Observation Skills
“Students in the research methods course learn how to collect and analyze observational and experimental data,” says Hoffman, “and in one assignment, called the ‘squares experiment,’ students try to experimentally test my dog Santiago’s visual perceptual abilities and compare them with their own.”
In the experiment, Santiago sits on a rug facing the experimenter, who stands 10–15 feet away with two foam boards, one placed on each side of her. One board is solid black, and the other is black with a white square in the middle. When Hoffman tells Santiago “square,” he goes over to the board with the white square and touches the square with his nose. If he touches the correct board, the student presses the Treat&Train remote, which she can do while remaining completely stationary, and Santiago runs to the Treat&Train to get his reward.
The Treat&Train enables the experimenter to unambiguously communicate to Santiago that he has made the correct choice. States Hoffman, “Because the Treat&Train delivers rewards in a predictable manner as long as the student presses the remote when Santiago touches the correct board, this makes it easy to incorporate students into the experiment without having to worry about how quickly they can fish a treat out of a pouch or pocket.”
You might wonder how Santiago knows to touch the square with his nose. He’s actually been trained to do this already. Hoffman explains, “Using the Treat&Train, I have trained Santiago to nose target foam boards that have a square on them and to refrain from nose targeting blank foam boards. He has been trained to nose target black squares on white backgrounds in addition to white squares on black backgrounds, and the squares vary in size. The square in the demo picture is the largest square we use, 1 square inch, and the smallest square with which we present Santiago is 1/16 of a square inch.”
Because Santiago has learned he will be rewarded for touching a foam board with a square on it, Hoffman’s group can use the trained task to determine Santiago’s visual perceptual abilities.
“For instance,” she says, “we can look at whether he sees white squares on black backgrounds better than black squares on white backgrounds, and we can see how small of a square he can detect from a certain distance, such as 10–15 feet, away.
“Of course, because this is an experiment, you don’t just run each trial once. Each of the four square sizes is presented to Santiago 20 times: 10 times with white squares on black backgrounds and 10 times with black squares on white backgrounds, for 80 trials total. Santiago gets to take a break after every 10 trials. For each trial, we randomize whether the square is placed on the student’s left or right, as well as the size of the square presented.”
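The trial structure Hoffman describes can be sketched as a short script. This is just an illustration, not the class's actual code; the four size labels below are placeholders, since only the largest (1 square inch) and smallest (1/16 of a square inch) sizes are given:

```python
import random

# Placeholder labels for the four square sizes: the article specifies
# only the largest (1 sq in) and the smallest (1/16 sq in).
SIZES = ["size_1", "size_2", "size_3", "size_4"]
COLORS = ["white_on_black", "black_on_white"]

def build_schedule(seed=None):
    """Each size x color combination appears 10 times (80 trials total);
    presentation order and the side holding the square are randomized."""
    rng = random.Random(seed)
    trials = [(s, c) for s in SIZES for c in COLORS for _ in range(10)]
    rng.shuffle(trials)
    # Independently randomize which side (the student's left or right)
    # holds the board with the square on each trial.
    return [(s, c, rng.choice(["left", "right"])) for s, c in trials]

schedule = build_schedule()
# Santiago gets a break after every 10 trials, so split into blocks of 10.
blocks = [schedule[i:i + 10] for i in range(0, len(schedule), 10)]
```

Randomizing the side separately from the size/color order matters: if the square alternated sides predictably, a clever dog could learn the pattern instead of using his eyes.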
Hoffman also points out that before they run any trials with Santiago, her students use the website dog-vision.com to get a better idea of how well Santiago may see and to predict how well he will perform.
To the general public, the experiment seems straightforward, until you realize that Santiago isn’t likely to score 100% on three sizes and then suddenly get zero correct on the size that’s too small for him to see from 10–15 feet away. Rather, as the square becomes harder for him to see, he’ll start choosing the correct side closer to 50% of the time. Just as, on average, one gets tails on 50% of coin tosses purely by chance, Santiago will approach the correct board about 50% of the time just by guessing. However, because his score won’t land on exactly 50%, students have to figure out how close to chance his results need to be before they can conclude that he can’t actually see the square well enough to pick the right board.
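To see why a score near 50% is the signature of guessing, and why exactly 50% isn't guaranteed, here is a quick simulation (my own sketch, not part of the coursework) of a dog picking boards purely at random over many 80-trial sessions:

```python
import random

rng = random.Random(0)  # fixed seed so the sketch is reproducible
TRIALS = 80

# Simulate many 80-trial sessions in which the "dog" picks a board
# at random, with a 50% chance of being correct on each trial.
sessions = [sum(rng.random() < 0.5 for _ in range(TRIALS))
            for _ in range(10_000)]

mean_correct = sum(sessions) / len(sessions)
print(f"average correct by chance: {mean_correct:.1f} / {TRIALS}")
print(f"best session by chance:    {max(sessions)} / {TRIALS}")
```

The scores cluster around 40/80 but wander a few trials either way from session to session, which is exactly why students need a statistical test rather than eyeballing whether a result is "close to" 50%.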
That’s where statistics come in. Says Hoffman, “Students run chi-square tests to determine if Santiago is touching the correct board at higher than chance levels and to determine if his performance differs depending on the size or color of the squares.”
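A one-sample chi-square goodness-of-fit test against the 50% chance expectation can be written in a few lines. This is a sketch under my own assumptions: the article doesn't specify which chi-square variant the class runs, and I've hand-coded the statistic rather than using a stats library:

```python
def chi_square_vs_chance(correct, total):
    """Chi-square goodness-of-fit statistic (df = 1) comparing observed
    correct/incorrect counts to the 50/50 split expected by chance."""
    expected = total / 2
    observed = [correct, total - correct]
    return sum((o - expected) ** 2 / expected for o in observed)

# Santiago's result from the October session: 78 correct out of 80.
stat = chi_square_vs_chance(78, 80)
CRITICAL_05 = 3.841  # chi-square critical value for df = 1 at alpha = 0.05
above_chance = stat > CRITICAL_05  # performance is far above chance
```

With 78 of 80 correct, the statistic comes out around 72, dwarfing the 3.841 cutoff, so chance guessing is firmly ruled out; what the test can't rule out, as the next section shows, is a flaw in the design itself.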
Students Learn an Essential Lesson About Study Design Flaws
Overall, it sounds pretty fun, and the results were interesting and somewhat surprising, too. “When we performed the experiment with Santiago in October, he did just as well detecting small squares as large squares, and black squares as white squares,” reports Hoffman. “In fact, he touched the correct board on 78 out of 80 trials.”
However, there is a twist to this plot that throws the results into question and it’s an essential experimental design lesson for students to learn.
Says Hoffman, “We learned that Santiago basically cheats on the task because he doesn’t select which board to touch from his starting point 10–15 feet away from the boards. Instead, he waits until he looks at the boards from just inches away before deciding which to nose target.”
For 74 of the trials, Santiago approached the board on his left first and looked at it closely; he either touched it if it had the square on it, or moved to the board on his right, looked at it, and then touched that one with his nose. This confound is exactly why researchers should always run practice trials and pilot studies before running a full study! It also illustrates the importance of good observational skills. Researchers who fail to recognize behavioral factors that could skew their results may publish incorrect findings or conclusions, one of the worst offenses a researcher can commit, unless they later catch the error and correct it in follow-up experiments and publications.
So how would Hoffman’s students have to change the experiment to test what they originally meant to test?
Hoffman suggests, “Based on Santiago’s performance, we concluded that to determine what he can see from 10–15 feet away, we would have to refrain from reinforcing him unless he walks directly to the square and touches it. Alternatively, we could put a barrier (like a fence panel) between the two boards so that it is not so easy for him to visit both boards. Perhaps students in next year’s Research Methods class can try running the experiment with these modifications!”
Personally, I think they should leave the experiment as is so that students learn what can go wrong and then also revise the experiment to see if they can come up with something better. There’s nothing like making a costly mistake and then having to spend a lot of time fixing it to help you learn the value of careful planning in the first place!
To find out more about Dr. Hoffman and her behavior research at Canisius College, go to http://www3.canisius.edu/~hoffmanc. To learn more about Canisius’ Animal Behavior, Ecology and Conservation program, go to http://www.canisius.edu/abec/.