    Towards A Better Pose Understanding for Humanoid Robots

Name: AlHami_temple_0225E_12549.pdf
Size: 12.29 MB
Format: PDF
Genre: Thesis/Dissertation
Date: 2016
Author: Al-Hami, Motaz Abdul Aziz
Advisor: Lakaemper, Rolf
Committee members: Lakaemper, Rolf; Shi, Justin Y.; Latecki, Longin; Spence, Andrew J.
Department: Computer and Information Science
Subject: Computer Science
Permanent link to this record: http://hdl.handle.net/20.500.12613/664
    
DOI: http://dx.doi.org/10.34944/dspace/646
    Abstract
Humanoid robots have been showing a rapidly increasing ability to interact with their surrounding environment. A large spectrum of such interactions focuses on how robots can mimic human postures and posture-related actions, such as walking, grasping, standing, and sitting on objects. In many cases the robot has a clear and well-defined description of the general postures related to a given task. This thesis explores human poses both for humanoid robots and in images; such understanding supports 3D pose modeling, which in turn can support humanoid robots in their interaction with the environment.

In chapter one, we focus on generating physical poses for a NAO humanoid robot. To generate poses interactively, the poses must be controlled to satisfy any potential interaction with the environment. A simulated and a real humanoid robot ("NAO") are used to discover a fitness-based optimal sitting pose on various types of objects, varying in shape and height. Starting from an initial set of random valid sitting poses as the input generation, a genetic algorithm (GA) constructs the fitness-based optimal pose for the robot to fit well on the object. The fitness criterion reflects pose stability (i.e., how feasible the pose is given real-world physical limitations) and converts each pose into a numerical stability level. The feasibility of the proposed approach is assessed in a simulated environment using the V-REP simulator, and the real NAO robot performs the poses produced by the simulation for real-world evaluation.

Next, in chapter two we focus on generating 3D pose models using only query keywords. We propose a self-motivated approach to learning 3D human pose conformations without using a priori knowledge.
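The chapter-one search described above can be sketched as a simple genetic algorithm. Everything in this sketch is a hypothetical stand-in: the pose encoding (five joint angles), the "stable" target configuration, and the fitness function all substitute for the thesis's actual NAO joint model and physics-based stability criterion.

```python
import random

N_JOINTS = 5
STABLE = [0.2, 0.4, 0.1, 0.3, 0.5]  # assumed stable sitting angles (illustrative only)

def fitness(pose):
    # Higher is better: negative squared distance to the stable configuration.
    # The thesis instead scores physical stability of the robot on the object.
    return -sum((a - t) ** 2 for a, t in zip(pose, STABLE))

def crossover(p1, p2):
    # Single-point crossover over the joint-angle vector.
    cut = random.randrange(1, N_JOINTS)
    return p1[:cut] + p2[cut:]

def mutate(pose, rate=0.2, sigma=0.05):
    # Perturb each joint angle with small Gaussian noise at the given rate.
    return [a + random.gauss(0, sigma) if random.random() < rate else a
            for a in pose]

def evolve(pop_size=40, generations=150, seed=0):
    random.seed(seed)
    # Initial generation: random valid poses (here, uniform joint angles).
    pop = [[random.uniform(0.0, 1.0) for _ in range(N_JOINTS)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        elite = pop[: pop_size // 4]  # keep the most stable poses
        pop = elite + [mutate(crossover(random.choice(elite),
                                        random.choice(elite)))
                       for _ in range(pop_size - len(elite))]
    return max(pop, key=fitness)

best = evolve()
```

In the thesis the fitness evaluation happens inside the V-REP simulation rather than against a fixed target vector; the elitist select-crossover-mutate loop is the part this sketch shares with the described approach.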
The proposed framework builds on known 2D human pose estimators for still images and constructs an approximate pose representing a group of images; from this approximation we build an approximate 3D model representing the pose conformation. The framework is a step toward self-motivated conceptual analysis and recognition in humanoid robots: its goal is to relate query keywords to 3D human poses. We evaluate the approach with different query keywords, each representing a specific human pose, and the results confirm the ability to learn 3D human poses without a priori knowledge.

Chapter three proposes a 3D analysis approach for 3D modeling. It uses a human-pose-based 3D shape context model to match human poses in 3D space and filters the matches with a hierarchical binary clustering approach. The performance of this approach is evaluated with different query keywords.

Recovering a 3D human pose, in the form of an abstracted skeleton, from a 2D image suffers from the loss of depth information. Assuming the projected pose is represented by a set of 2D landmarks capturing the pose limbs, recovering the original 3D locations is an ill-posed problem. Camera localization in 3D space plays a major role in recovering a 3D configuration, and an inaccurate camera localization can mislead the recovery process. In chapter four, we propose a 3D camera localization model that uses only the human-pose appearance in a single 2D image (i.e., the set of 2D landmarks). We apply supervised multi-class logistic regression to assign the camera location in 3D space, assuming a set of predefined labeled camera locations in the learning process. The features we train on consist of relative limb lengths and 2D shape context. The goal is to build a relation between these projected landmarks and the camera location in 3D space. This kind of analysis allows us to reconstruct 3D poses from the 2D projection alone, without any predefined camera parameters. We test the model on a set of real images showing a variety of camera locations.
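The chapter-four classifier can be sketched as multinomial (softmax) logistic regression trained by gradient descent. The features, class labels, and data below are synthetic stand-ins; the thesis trains on relative limb lengths and 2D shape context extracted from real landmark sets, not on these illustrative vectors.

```python
import math
import random

N_CLASSES, N_FEATS = 3, 4  # assumed: 3 predefined camera locations, 4 features

def softmax(z):
    # Numerically stable softmax over class scores.
    m = max(z)
    e = [math.exp(v - m) for v in z]
    s = sum(e)
    return [v / s for v in e]

def scores(W, b, x):
    # Linear score per class: W[c] . x + b[c]
    return [sum(w * xi for w, xi in zip(W[c], x)) + b[c]
            for c in range(N_CLASSES)]

def train(X, y, lr=0.5, epochs=200):
    W = [[0.0] * N_FEATS for _ in range(N_CLASSES)]
    b = [0.0] * N_CLASSES
    for _ in range(epochs):
        for x, label in zip(X, y):
            p = softmax(scores(W, b, x))
            for c in range(N_CLASSES):
                # Gradient of cross-entropy loss w.r.t. class-c score.
                err = p[c] - (1.0 if c == label else 0.0)
                for j in range(N_FEATS):
                    W[c][j] -= lr * err * x[j]
                b[c] -= lr * err
    return W, b

def predict(W, b, x):
    s = scores(W, b, x)
    return s.index(max(s))

# Synthetic data: each camera location yields a distinct feature pattern
# (standing in for landmark-derived features from differently placed cameras).
random.seed(1)
CENTERS = [[1, 0, 0, 1], [0, 1, 1, 0], [1, 1, 0, 0]]
X = [[v + random.gauss(0, 0.1) for v in CENTERS[c]]
     for c in range(N_CLASSES) for _ in range(30)]
y = [c for c in range(N_CLASSES) for _ in range(30)]

W, b = train(X, y)
accuracy = sum(predict(W, b, x) == t for x, t in zip(X, y)) / len(X)
```

Once a camera-location label is predicted for an image, the corresponding assumed camera pose can drive the 2D-to-3D lifting step that the abstract describes.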
Collections: Theses and Dissertations
