This code implements Superpixel-based HMP affordance detection as described in:

Affordance Detection of Tool Parts from Geometric Features
Austin Myers, Ching L. Teo, Cornelia Fermuller, and Yiannis Aloimonos
International Conference on Robotics and Automation (ICRA), 2015.

For details, please visit the project webpage:
http://www.umiacs.umd.edu/~amyers/part_affordance/

If you have any questions or find any bugs, please email amyers "at" umd.edu.

================================================================================
Overview
================================================================================
The code provides superpixel-segmentation-based labeling of RGB-D
images, using hierarchical sparse codes as features for each segment.
It was designed to evaluate the effectiveness of different feature
combinations, and it processes data in batches. The code is not
optimized; in particular, the original depth-filtering and segmentation
routines are quite slow. You can replace these calls with your own
methods to speed things up.

================================================================================
Configuration
================================================================================
To test different feature combinations, feature architectures, and datasets,
there are several configuration files in "src/config_files/" which can be
customized. These files can be edited to change the list of object labels,
the paths to data and temporary directories, the amount of memory available for
dictionary learning, segmentation parameters, or classifier parameters.
Comments in the files describe the parameters further, but this readme will
go over changes needed to run basic experiments.
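
As a rough illustration, customizing a configuration might look like the
sketch below. The field names here are hypothetical, not the actual names
used in this codebase; consult the comments in "src/config_files/" for the
real parameter names.

```matlab
% Hypothetical sketch -- see src/config_files/ for the real field names.
config = config_base('my_experiment', '/path/to/dataset');
config.labels         = {'grasp', 'cut', 'scoop'};     % object/affordance label list (illustrative)
config.temp_dir       = '/tmp/affordance_cache';       % temporary directory for intermediate files
config.dict_memory_gb = 4;                             % memory budget for dictionary learning
```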

================================================================================
Comparing Feature Combinations and Feature Architectures
================================================================================
1. Modify the first function call in "config_compare_feats.m" to reflect
   your desired output directory name and dataset path.

   config = config_base('output_dir_name', ...
                        'path_to_dataset');

   You can also select different feature combinations and architectures
   to use in the experiments from this file.

2. Start matlab from the project directory and add the "src" directory and
   subdirectories to the search path.

   >> addpath(genpath('src'));

3. Run the experiments:

   >> compare_feats();

   This function computes the surface normals and curvatures, then trains
   and tests over the different splits, feature combinations, and feature
   architectures. So, you're all done! You just have to wait.

   Kendall's tau will be displayed to evaluate the learned models, and
   the test predictions will be saved so that you can use other evaluation
   methods as well.
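
Kendall's tau measures the rank correlation between predicted and
ground-truth affordance values. If you want to evaluate saved predictions
yourself, MATLAB's Statistics and Machine Learning Toolbox computes it
directly; the variable names and values below are purely illustrative.

```matlab
% Hypothetical sketch: rank-correlate predictions with ground truth.
% Requires the Statistics and Machine Learning Toolbox for corr().
gt   = [0.1; 0.4; 0.2; 0.9; 0.7];   % ground-truth affordance values (illustrative)
pred = [0.2; 0.3; 0.1; 0.8; 0.9];   % model predictions (illustrative)
tau  = corr(gt, pred, 'Type', 'Kendall');
fprintf('Kendall''s tau: %.3f\n', tau);
```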

================================================================================
Cornell Grasping Rectangle Dataset Experiments
================================================================================

After downloading the Cornell Grasping Rectangle Dataset, you will need to
prepare the data for recognition and detection. The following command performs
all the necessary cropping, resizing, and normal/curvature processing, saving
the results to disk.

>> cornell_preprocess('cornell_data_path', 'processed_cornell_data_path', ...
                      'detection_results_path', 'recognition_results_path');

You will need to modify the "config_cornell_*" files to reflect these paths.

config = config_cornell_base('recognition_results_path', ...
                             'processed_cornell_data_path');

After preparing the data and updating the "config_cornell_*" files,
simply run,

>> cornell_recognition();
>> cornell_detection();

to reproduce the recognition and detection experiments respectively.
More details about these experiments can be found in the corresponding
subdirectory.
