
    Exploiting Competition Relationship for Robust Visual Recognition

    Name: TETDEDXDu-temple-0225E-12280.pdf
    Size: 4.212Mb
    Format: PDF
    Genre: Thesis/Dissertation
    Date: 2015
    Author: DU, LIANG
    Advisor: Ling, Haibin
    Committee members: Latecki, Longin; Shi, Yuan; Zhu, Ying
    Department: Computer and Information Science
    Subjects: Computer Science; Information Science; Information Technology; Competition Relationship; Face Analysis; Visual Privacy Protection; Visual Recognition
    Permanent link to this record: http://hdl.handle.net/20.500.12613/2814
    
    DOI: http://dx.doi.org/10.34944/dspace/2796
    Abstract
    Leveraging task relatedness has proven beneficial in many machine learning problems, and extensive research has been done to exploit task relatedness in various forms. A common assumption is that the tasks are intrinsically similar to each other. Based on this assumption, joint learning algorithms are usually implemented through some form of information sharing, such as shared hidden units in neural networks, a common prior distribution in a hierarchical Bayesian model, shared weak learners of a boosting classifier, shared distance metrics, or a shared low-rank structure across tasks. However, another very common and important task relationship, task competition, has been largely overlooked. Tasks compete with each other when there are conflicts between their goals. Considering that competing tasks are ubiquitous, this dissertation accommodates this intuition from an algorithmic perspective and applies the resulting algorithms to a variety of visual recognition problems. Focusing on exploiting task competition in visual recognition, the dissertation presents three types of algorithms and applies them to different recognition tasks.

    First, hypothesis competition is exploited in a boosting framework. The proposed algorithm, CompBoost, jointly models the target and auxiliary tasks with a generalized additive regression model regularized by competition constraints. The model treats feature selection as weak learner (i.e., base function) selection, and thus provides a mechanism to improve feature filtering guided by task competition. More specifically, following a stepwise optimization scheme, it iteratively adds a new weak learner that balances the gain for the target task against the inhibition on the auxiliary ones. The algorithm is called CompBoost because it shares a similar structure with the popular AdaBoost algorithm. Two test beds are used for evaluation: (1) content-independent writer identification, exploiting the competing task of handwriting recognition, and (2) actor-independent facial expression recognition, exploiting the competing task of face recognition. In experiments on both applications, the approach demonstrates promising performance gains from exploiting the between-task competition relationship.

    Second, feature competition is instantiated through an alternating coordinate gradient algorithm. Sharing the same feature pool, two tasks are modeled together in a joint loss framework, with feature interaction controlled via an orthogonal regularization over feature importance vectors. An alternating greedy coordinate descent (AGCD) learning algorithm is then derived to estimate the model. The algorithm effectively excludes distracting features at a fine-grained level to improve face verification; in other words, it does not forbid feature sharing between competing tasks at a macro level, but instead selectively inhibits distracting features while preserving discriminative ones. For evaluation, the algorithm is applied to two widely tested face-aging benchmark datasets, FG-Net and MORPH. On both datasets it achieves very promising performance and outperforms all previously reported results. These experiments, together with detailed experimental analysis, clearly show the benefit of coordinating conflicting tasks to improve visual recognition.

    Third, two ad hoc feature competition algorithms are proposed for visual privacy protection, a practical case of competing factors in real-world applications. The algorithms are designed, on top of different modeling frameworks, to achieve the best balance between the competing factors in visual privacy protection, and they are applied to two tasks: license plate de-identification and face de-identification.
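
    To make the stepwise hypothesis-competition idea behind CompBoost concrete, the following Python sketch illustrates one plausible reading of the selection rule described in the abstract. It is a minimal illustration, not the dissertation's implementation: the squared-correlation gain, the trade-off weight mu, the shared sample matrix for both tasks, and the function name compboost_select are all assumptions made for this example.

        import numpy as np

        def compboost_select(weak_learners, X, y_target, y_aux, mu=0.5, rounds=50):
            # Greedy stepwise selection in the spirit of CompBoost: at each round,
            # pick the weak learner whose gain on the target residual, discounted
            # by mu times its gain on the competing auxiliary labels, is largest.
            f = np.zeros(len(y_target))            # additive model for the target task
            selected = []
            for _ in range(rounds):
                residual = y_target - f
                best_j, best_score, best_vals = None, -np.inf, None
                for j, h in enumerate(weak_learners):
                    v = h(X)                        # weak learner response on the samples
                    norm = float(np.dot(v, v)) + 1e-12
                    gain_target = np.dot(v, residual) ** 2 / norm
                    gain_aux = np.dot(v, y_aux) ** 2 / norm
                    score = gain_target - mu * gain_aux   # reward target, inhibit auxiliary
                    if score > best_score:
                        best_j, best_score, best_vals = j, score, v
                # least-squares step size for the chosen weak learner on the target task
                step = np.dot(best_vals, residual) / (float(np.dot(best_vals, best_vals)) + 1e-12)
                f += step * best_vals
                selected.append(best_j)
            return selected, f

    In this toy form, setting mu to zero recovers plain greedy additive regression on the target task, while increasing it penalizes weak learners (and hence features) that also explain the competing task.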
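
    The feature-competition model from the second part of the abstract can be pictured in a similar way. The fragment below is only a rough sketch under assumed choices: a squared loss for each task, the penalty (|w_t| . |w_a|)^2 as a stand-in for the orthogonal regularization over feature importance vectors, and one steepest-coordinate update per task per sweep as a crude stand-in for the AGCD procedure; the name agcd_sketch and the step size lr are hypothetical.

        import numpy as np

        def agcd_sketch(X_t, y_t, X_a, y_a, lam=1.0, sweeps=100, lr=0.05):
            # Two tasks share one feature pool; an overlap penalty on the importance
            # vectors pushes them toward orthogonal support, so features that mainly
            # serve the competing task are selectively inhibited rather than banned.
            d = X_t.shape[1]
            w_t, w_a = np.zeros(d), np.zeros(d)
            for _ in range(sweeps):
                # Alternate between the two importance vectors; for each, take one
                # greedy step on the coordinate with the steepest gradient.
                for w, X, y, w_other in ((w_t, X_t, y_t, w_a), (w_a, X_a, y_a, w_t)):
                    overlap = float(np.dot(np.abs(w), np.abs(w_other)))
                    grad = X.T @ (X @ w - y) / len(y)                             # squared-loss gradient
                    grad = grad + 2 * lam * overlap * np.sign(w) * np.abs(w_other)  # overlap-penalty gradient
                    j = int(np.argmax(np.abs(grad)))                              # greedy coordinate choice
                    w[j] -= lr * grad[j]                                          # single coordinate update
            return w_t, w_a

    Even in this simplified form, coordinates that carry signal for both tasks accumulate a larger penalty gradient, so the two importance vectors drift toward largely disjoint sets of features.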
    Collections: Theses and Dissertations
