@@ -38,7 +38,7 @@ Now you can run the scripts in the data folder to move the videos to the appropr
Before you can run Methods #4 and #5, you need to extract features from the images with the CNN. This is done by running `extract_features.py`. On my Dell with a GeForce 960M GPU, this takes about 8 hours. If you want to limit extraction to just the first N classes, you can set that option in the file.
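The exact code lives in `extract_features.py`, but the data flow it implements can be sketched roughly as follows (the function name, sequence length, and feature size below are illustrative, not the script's actual API): each video becomes a fixed-length sequence of per-frame CNN feature vectors, saved as one array per clip.

```python
import numpy as np

def extract_sequence(frames, feature_fn, seq_length=40):
    """Reduce a video (a list of frames) to a fixed-length sequence
    of per-frame feature vectors."""
    # Sample seq_length frames evenly across the whole video.
    idx = np.linspace(0, len(frames) - 1, seq_length).astype(int)
    # Run each sampled frame through the feature extractor and stack.
    return np.stack([feature_fn(frames[i]) for i in idx])

# Example with a dummy extractor standing in for the CNN:
features = extract_sequence(list(range(100)), lambda f: np.full(2048, float(f)))
print(features.shape)  # (40, 2048)
```

With a real CNN, `feature_fn` would be a forward pass through the network with the classification head removed; the dummy above just shows the shape of the result.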
-## Running models
+## Training models
The CNN-only method (method #1 in the blog post) is run from `train_cnn.py`.
...
...
@@ -46,6 +46,10 @@ The rest of the models are run from `train.py`. There are configuration options
The models are all defined in `models.py`. Refer to that file to see which models you can run from `train.py`.
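The layout of `models.py` isn't shown here, but a common pattern for this kind of setup (the names and builders below are illustrative placeholders, not the file's actual contents) is a registry that maps the model name from `train.py`'s configuration to a builder function:

```python
# Hypothetical model registry; real builders would return compiled
# Keras models, stubbed out here as strings for illustration.
def build_lstm():
    return "lstm"

def build_mlp():
    return "mlp"

MODEL_BUILDERS = {
    "lstm": build_lstm,
    "mlp": build_mlp,
}

def get_model(name):
    """Look up and build the model named in the training configuration."""
    if name not in MODEL_BUILDERS:
        raise ValueError(f"Unknown model {name!r}; choose from {sorted(MODEL_BUILDERS)}")
    return MODEL_BUILDERS[name]()

print(get_model("lstm"))  # lstm
```

The advantage of a registry like this is that adding a new model only means writing one builder and registering it; the training script's configuration doesn't change shape.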
+## Demo/Using models
+I have not yet implemented a demo where you can pass a video file to a model and get a prediction. Pull requests are welcome if you'd like to help out!
### UCF101 Citation
Khurram Soomro, Amir Roshan Zamir and Mubarak Shah, UCF101: A Dataset of 101 Human Action Classes From Videos in The Wild, CRCV-TR-12-01, November 2012.