One of the main challenges in real-time compressed sensing systems is the computational cost of the reconstruction algorithms. This is especially problematic for closed-loop sensing applications, where the sensing parameters need to be constantly adjusted to adapt to a dynamic scene. In a preliminary experiment on the MNIST dataset, we showed that scene information (object identity, movement direction, and speed) can be extracted directly from the compressed samples using a deep convolutional neural network. The network achieves 100% accuracy in distinguishing moving velocity, 96.22% in recognizing the digit, and 90.04% in detecting moving direction after the coded images are re-centered. Although the classification accuracy drops slightly compared to classifying the original videos, classification on compressed samples is twice as fast as classification on the videos directly, and this method also eliminates the need for sparse reconstruction prior to classification.
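To make the pipeline concrete, the sketch below shows the compressed-sampling step that produces the inputs the classifier would consume: a frame is multiplied by a random Gaussian sensing matrix to yield a short measurement vector, with no reconstruction of the frame afterward. This is a minimal illustration only; the frame size, the 4:1 compression ratio, and the Gaussian measurement scheme are assumptions for the example, not the paper's actual sensing design.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 28x28 MNIST-style frame, flattened to n = 784 pixels.
n = 28 * 28
x = rng.random(n)

# Random Gaussian sensing matrix Phi with m << n measurements.
# m = 196 gives a 4:1 compression ratio (an assumption for this
# sketch; the actual measurement scheme may differ).
m = 196
Phi = rng.normal(0.0, 1.0 / np.sqrt(m), size=(m, n))

# Compressed samples y = Phi @ x. These measurements, rather than
# a sparse reconstruction of x, are what the deep convolutional
# network would take as input for classification.
y = Phi @ x

print(y.shape)  # (196,)
```

Because the classifier operates on the m-dimensional vector y instead of a reconstructed n-dimensional frame, the per-frame sparse-recovery solve is skipped entirely, which is the source of the reported speedup.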