Operant Conditioning Learning
B.F. Skinner developed his theory of operant conditioning by conducting numerous experiments on animals. He used a special apparatus known as the “Skinner Box” for his studies on rats.
In the experiment, a hungry rat was placed inside the Skinner box. At first the rat was inactive, but as it gradually adapted to the environment of the box, it began to explore. Sooner or later, the rat discovered a lever which, when pressed, released food into the box. Once its hunger returned after eating, it pressed the lever a second time. The pattern repeated a third, fourth and fifth time, and after a while the hungry rat pressed the lever immediately upon being placed in the box. At that point the conditioning was deemed complete.
Here, the action of pressing the lever is the operant response/behavior, and the food released inside the chamber is the reward. The experiment is also called Instrumental Conditioning, as the response is instrumental in obtaining the food.
This experiment also illustrates positive reinforcement: because the hungry rat was given food whenever it pressed the lever, the food acted as a positive reinforcer, strengthening the lever-pressing behavior.
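The reinforcement loop described above can be expressed as a toy simulation: a lever press is reinforced with food, and each reinforced press makes the next press more likely. The initial press probability and learning rate here are illustrative assumptions, not values from Skinner's data.

```python
import random

def simulate_positive_reinforcement(trials=200, seed=0):
    """Toy model of the Skinner-box experiment: each reinforced
    lever press strengthens the lever-pressing response.
    All numbers below are illustrative assumptions."""
    rng = random.Random(seed)
    p_press = 0.05          # the rat rarely presses the lever at first
    learning_rate = 0.2     # how strongly each reward strengthens the response
    history = []
    for _ in range(trials):
        pressed = rng.random() < p_press
        if pressed:
            # food pellet delivered -> positive reinforcement:
            # the press becomes more probable on later trials
            p_press += learning_rate * (1.0 - p_press)
        history.append(p_press)
    return history

history = simulate_positive_reinforcement()
```

Because reinforcement only ever strengthens the response in this sketch, the press probability never decreases over the trials, mirroring how the rat's lever pressing became more and more reliable.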
Skinner conducted another experiment to demonstrate negative reinforcement. He placed a rat in the chamber in a similar manner, but instead of keeping it hungry, he subjected the chamber to an unpleasant electric current. The rat, experiencing discomfort, began moving frantically around the box and accidentally knocked the lever. Pressing the lever immediately stopped the flow of the unpleasant current. Thus, escape from the electric current acted as the negative reinforcement, and this consequence ensured that the rat repeated the action again and again.
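The escape experiment can be sketched the same way: the shock stays on until the lever is pressed, and each successful escape makes pressing more likely, so the time taken to escape shrinks across trials. Again, the probabilities and learning rate are illustrative assumptions, not measured values.

```python
import random

def simulate_escape_learning(trials=50, seed=1):
    """Toy model of Skinner's escape experiment: the current stays on
    until the lever is pressed; each escape strengthens the pressing
    response, so escape latency (steps until the press) shrinks.
    Parameters are illustrative assumptions."""
    rng = random.Random(seed)
    p_press = 0.1           # chance per time step of hitting the lever
    learning_rate = 0.15
    latencies = []
    for _ in range(trials):
        steps = 1
        while rng.random() >= p_press:   # current continues until the press
            steps += 1
        # the press ends the unpleasant current -> negative reinforcement
        p_press += learning_rate * (1.0 - p_press)
        latencies.append(steps)
    return latencies, p_press

latencies, p_final = simulate_escape_learning()
```

Note the contrast with the first sketch: here the behavior is strengthened by the removal of an aversive stimulus rather than by the delivery of a reward, which is exactly the distinction between negative and positive reinforcement.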
Both experiments clearly illustrate how operant conditioning works. The most important part of any operant conditioning study is to identify the operant behavior and the consequence it produces in that particular environment.