A "detector" is not necessarily a full detection algorithm; it may just be a constraint checker. So this takes in some list of regions with probabilities that there is a face-- to the best knowledge of the previous filter. So initially we'll have every region in the image with probability 1, and will cut that down in each stage based on the detector/constraint set.
Definition at line 50 of file ConstrainedDetector.H.
Public Member Functions
virtual ObjectDetector (std::string operatorName)=0
virtual std::vector< ProbabilityRegion > filterRegions (std::vector< ProbabilityRegion > &regions, RoleImage *image)=0
 This function will take in a vector of possible image regions and check for possible target objects.
std::vector< RobotObjects::RobotObject * > * getConsumers ()
std::vector< RobotObjects::RobotObject * > m_consumers
virtual std::vector< ProbabilityRegion > filterRegions (std::vector< ProbabilityRegion > &regions, RoleImage *image) = 0
This function will take in a vector of possible image regions and check for possible target objects.
Note that a probability of 1 does not mean there is definitely a face there; it only means there could be a face in that region, as best as this detector can tell. We don't expect a reasonable probability estimate until late in the chain, when real detectors run rather than constraints.
std::vector< RobotObjects::RobotObject * > * getConsumers ()
std::vector< RobotObjects::RobotObject * > m_consumers