dc.contributor.advisor: Lakaemper, Rolf
dc.creator: Al-Hami, Motaz Abdul Aziz
dc.date.accessioned: 2020-10-20T13:33:17Z
dc.date.available: 2020-10-20T13:33:17Z
dc.date.issued: 2016
dc.identifier.other: 958157184
dc.identifier.uri: http://hdl.handle.net/20.500.12613/664
dc.description.abstract: Towards A Better Pose Understanding for Humanoid Robots, by Mo’taz Al-Hami.

Humanoid robots have shown a rapidly increasing ability to interact with their surrounding environment. A large spectrum of such interactions focuses on how robots can mimic human postures and posture-related actions, such as walking, grasping, standing, and sitting on objects. In many cases the robot has a clear, well-defined description of the general postures related to a given task. This thesis focuses on exploring human poses, both as performed by humanoid robots and as observed in images. Such understanding supports 3D pose modeling, which in turn can support humanoid robots in their interaction with the environment.

Chapter one focuses on generating physical poses for a NAO humanoid robot. To interact with the environment, poses must be controlled so that they satisfy the constraints of the intended interaction. A simulated and a real humanoid robot "NAO" are used to discover a fitness-optimal sitting pose on objects of various shapes and heights. Starting from an initial generation of random valid sitting poses, a genetic algorithm (GA) constructs the fitness-optimal pose that lets the robot fit well on the object. The fitness criterion reflects pose stability (i.e., how feasible the pose is under real-world physical limitations) and maps each pose to a numerical stability score. The feasibility of the approach is evaluated in a simulated environment using the V-REP simulator, and the real NAO robot then performs the poses produced in simulation for real-world evaluation.

Chapter two focuses on generating 3D pose models from query keywords alone. We propose a self-motivated approach that learns 3D human pose conformations without a priori knowledge. The framework builds on existing 2D human pose estimators for still images and constructs an approximate pose representing a group of images; from this approximation it builds an approximate 3D model of the pose conformation. The framework is a step toward self-motivated conceptual analysis and recognition in humanoid robots, and its goal is to relate query keywords to 3D human poses. We evaluate the approach on different query keywords, each representing a specific human pose, and the results confirm that 3D human poses can be learned without a priori knowledge.

Chapter three proposes a 3D analysis approach for 3D modeling. It matches human poses in 3D space using a human-pose-based 3D shape context model and filters the matches with hierarchical binary clustering. The performance of this approach is likewise evaluated on different query keywords.

Recovering a 3D human pose, in the form of an abstracted skeleton, from a 2D image suffers from the loss of depth information. Assuming the projected pose is represented by a set of 2D landmarks capturing the pose's limbs, recovering the original 3D locations is an ill-posed problem, and camera localization in 3D space plays a major role: an inaccurate camera location can mislead the recovery. Chapter four therefore proposes a 3D camera localization model that uses only the human-pose appearance in a single 2D image (i.e., the set of 2D landmarks). We apply supervised multi-class logistic regression to assign a camera location in 3D space. For learning, we assume a set of predefined labeled camera locations; the training features are the relative limb lengths and the 2D shape context of the landmarks. The goal is to relate the projected landmarks to the camera's 3D location, which allows 3D poses to be reconstructed from the 2D projection alone, without predefined camera parameters. We test the model on a set of real images covering a variety of camera locations. (Illustrative code sketches for the techniques of chapters one, three, and four follow the metadata fields below.)
dc.format.extent: 99 pages
dc.language.iso: eng
dc.publisher: Temple University. Libraries
dc.relation.ispartof: Theses and Dissertations
dc.rights: IN COPYRIGHT - This Rights Statement can be used for an Item that is in copyright. Using this statement implies that the organization making this Item available has determined that the Item is in copyright and either is the rights-holder, has obtained permission from the rights-holder(s) to make their Work(s) available, or makes the Item available under an exception or limitation to copyright (including Fair Use) that entitles it to make the Item available.
dc.rights.uri: http://rightsstatements.org/vocab/InC/1.0/
dc.subject: Computer Science
dc.title: Towards A Better Pose Understanding for Humanoid Robots
dc.type: Text
dc.type.genre: Thesis/Dissertation
dc.contributor.committeemember: Lakaemper, Rolf
dc.contributor.committeemember: Shi, Justin Y.
dc.contributor.committeemember: Latecki, Longin
dc.contributor.committeemember: Spence, Andrew J.
dc.description.department: Computer and Information Science
dc.relation.doi: http://dx.doi.org/10.34944/dspace/646
dc.ada.note: For Americans with Disabilities Act (ADA) accommodation, including help with reading this content, please contact scholarshare@temple.edu
dc.description.degree: Ph.D.
refterms.dateFOA: 2020-10-20T13:33:17Z
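
Sketch for chapter one: a minimal fitness-based genetic algorithm over joint-angle vectors. This is not the author's implementation; the joint limits, the toy stability score, and the GA parameters (population size, truncation selection, mutation rate) are illustrative assumptions. The dissertation's actual fitness scores physical stability of the sitting pose, evaluated in the V-REP simulator.

    import random

    # Hypothetical joint limits in radians (the real NAO has per-joint ranges).
    JOINT_LIMITS = [(-2.0, 2.0)] * 12

    def random_pose():
        # A random valid pose: one angle drawn within each joint's limits.
        return [random.uniform(lo, hi) for lo, hi in JOINT_LIMITS]

    def stability_fitness(pose):
        # Toy stand-in for the stability score: prefer mid-range angles.
        # The thesis instead measures physical feasibility in simulation.
        return -sum(abs(a) for a in pose)

    def crossover(p1, p2):
        # Single-point crossover of two parent joint-angle vectors.
        cut = random.randrange(1, len(p1))
        return p1[:cut] + p2[cut:]

    def mutate(pose, rate=0.1):
        # Re-sample each joint with a small probability, staying in limits.
        return [random.uniform(lo, hi) if random.random() < rate else a
                for a, (lo, hi) in zip(pose, JOINT_LIMITS)]

    def evolve(pop_size=50, generations=100):
        population = [random_pose() for _ in range(pop_size)]
        for _ in range(generations):
            population.sort(key=stability_fitness, reverse=True)
            parents = population[: pop_size // 2]   # truncation selection
            children = [mutate(crossover(random.choice(parents),
                                         random.choice(parents)))
                        for _ in range(pop_size - len(parents))]
            population = parents + children
        return max(population, key=stability_fitness)

    print(evolve())   # fittest joint-angle vector found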
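Sketch for chapter three (and the chapter four features): a common 2D shape context descriptor in the style of Belongie et al., i.e., per-landmark log-polar histograms of the other landmarks' relative positions. The bin counts and radius range are assumptions; the thesis's 3D shape context extends this idea to 3D space.

    import numpy as np

    def shape_context(points, n_r=5, n_theta=12):
        # points: (N, 2) array of 2D landmarks for one pose.
        points = np.asarray(points, dtype=float)
        diffs = points[None, :, :] - points[:, None, :]   # pairwise vectors
        r = np.linalg.norm(diffs, axis=2)
        theta = np.arctan2(diffs[..., 1], diffs[..., 0])
        mean_r = r[r > 0].mean()                           # scale normalization
        r_edges = np.logspace(np.log10(0.125), np.log10(2.0), n_r + 1) * mean_r
        hists = []
        for i in range(len(points)):
            mask = np.arange(len(points)) != i             # exclude self
            r_bin = np.digitize(r[i, mask], r_edges) - 1
            t_bin = ((theta[i, mask] + np.pi) / (2 * np.pi)
                     * n_theta).astype(int) % n_theta
            h = np.zeros((n_r, n_theta))
            valid = (r_bin >= 0) & (r_bin < n_r)           # drop out-of-range radii
            np.add.at(h, (r_bin[valid], t_bin[valid]), 1)
            hists.append(h.ravel())
        return np.concatenate(hists)

    pose = np.random.default_rng(0).normal(size=(14, 2))   # 14 hypothetical landmarks
    print(shape_context(pose).shape)                        # (14 * 5 * 12,) = (840,)

Descriptors like this can be compared (e.g., by histogram distance) to match poses, after which the thesis filters the matches with hierarchical binary clustering.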
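Sketch for chapter four: camera localization reduced to multi-class classification, mapping 2D landmark features to one of a set of predefined camera locations. The sketch below uses scikit-learn's LogisticRegression on synthetic orthographic projections of a hypothetical stick figure; the skeleton, the azimuth bins standing in for camera locations, the chained-limb feature, and the noise model are all illustrative assumptions (the thesis also appends 2D shape context features).

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    # Hypothetical canonical 3D skeleton (joints x 3).
    SKELETON_3D = np.array([
        [0.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 2.0, 0.0],   # pelvis, chest, head
        [-1.0, 1.8, 0.2], [1.0, 1.8, 0.2],                   # hands
        [-0.4, -1.0, 0.3], [0.4, -1.0, -0.3],                # feet
    ])

    def project(points3d, azimuth):
        # Orthographic projection after rotating the camera about the y axis.
        c, s = np.cos(azimuth), np.sin(azimuth)
        R = np.array([[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]])
        return (points3d @ R.T)[:, :2]

    def limb_features(landmarks2d):
        # Relative lengths of pseudo-limbs (consecutive landmark pairs),
        # normalized by their sum so the features are scale-invariant.
        lengths = np.linalg.norm(np.diff(landmarks2d, axis=0), axis=1)
        return lengths / lengths.sum()

    K = 8                                      # predefined labeled camera locations
    azimuths = np.linspace(0.0, np.pi / 2, K)  # assumed azimuth bins
    rng = np.random.default_rng(0)

    X, y = [], []
    for label, az in enumerate(azimuths):
        for _ in range(50):                    # jittered samples per camera bin
            noisy = SKELETON_3D + rng.normal(scale=0.05, size=SKELETON_3D.shape)
            X.append(limb_features(project(noisy, az)))
            y.append(label)

    clf = LogisticRegression(max_iter=1000).fit(X, y)   # softmax over K locations
    # One test image's landmarks -> predicted camera-location label.
    print(clf.predict([limb_features(project(SKELETON_3D, azimuths[3]))]))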


Files in this item

Name: AlHami_temple_0225E_12549.pdf
Size: 12.29 MB
Format: PDF
