• A Close-Space Sublimation Driven Pathway for the Manipulation of Substrate-Supported Micro- and Nanostructures

      Neretina, Svetlana; Cohen, Richard; Hughes, Robert; Sheffield, Joel B.; Zhang, Huichun (Judy) (Temple University. Libraries, 2014)
      The ability to fabricate structures and engineer materials on the nanoscale leads to the development of new devices and the study of exciting phenomena. Nanostructures attached to the surface of a substrate, in a manner that renders them immobile, have numerous potential applications in a diverse range of areas. Substrate-supported nanostructures can be fabricated using numerous modalities; however, the easiest and least expensive way to create a large area of randomly distributed particles is thermal dewetting. In this process, a metastable thin film is deposited at room temperature and heated, causing the film to lower its surface energy by agglomerating into droplet-like nanostructures. The main drawbacks of nanostructure fabrication via this technique are the substantial size distributions realized and the lack of control over nanostructure placement. In this doctoral dissertation, a new pathway for imposing order onto the thermal dewetting process and for manipulating the size, placement, shape, and composition of preformed templates is described. It involves confining substrate-supported thin films or nanostructure templates with the free surface of a metal film or a second substrate surface. Confining the templates in this manner and heating them to elevated temperatures leads to changes in the characteristics of the nanostructures formed. Three different modalities are demonstrated, which alter the preformed structures by: (i) subtracting atoms from the templates, (ii) adding atoms to the templates, or (iii) simultaneously adding and subtracting atoms. The ability to carry out such processes depends on the choice of the confining surface and the nanostructured templates used. A subtractive process occurs when an electroformed nickel mesh is placed in conformal contact with a continuous gold film while it dewets, resulting in the formation of a periodic array of gold microstructures on an oxide substrate surface. 
When heated, the gold beneath the grid selectively attaches to it due to a surface energy gradient that drives gold from the low-surface-energy oxide surface to the higher-surface-energy nickel mesh. Because this process is confined to areas adjacent to and in contact with the grid surface, the film ruptures at well-defined locations to form isolated islands of gold and, subsequently, a periodic array of microstructures. The process can be carried out on substrates of different crystallographic orientations, leading to nanostructures that form epitaxially with orientations determined by the underlying substrate. The process can be extended by placing a metallic foil of Pt or Ni over preformed templates, in which case a reduction in the size of the initial structures is observed. Placing a foil on structures with random placement and a wide size distribution results not only in a size reduction but also in a narrowed size distribution. Additive processes are carried out using materials that possess high vapor pressures at temperatures well below the sublimation temperature of the template material. In this case, a germanium substrate was used as a source of germanium adatoms while gold or silver nanostructures served as heterogeneous nucleation sites. At elevated temperatures the adatoms collect in sufficient quantities to transform each site into a liquid alloy which, upon cooling, phase separates into elemental components sharing a common interface, resulting in the formation of heterodimers and, upon etching away the Ge, hollowed metal nanocrescents. A process combining aspects of the additive and subtractive processes was carried out using a metallic foil with a high vapor pressure and a higher surface energy than the substrate surface (in this case, Pd foil). This process resulted in the initial preformed gold templates being annihilated and replaced by nanostructures of palladium, thereby altering their chemical composition. 
The assembly process relies on the concurrent sublimation of palladium and gold, which results in the complete transfer of the templated gold from the substrate to the foil, but not before the templates act as heterogeneous nucleation sites for palladium adatoms arriving at the substrate surface. Thus, the process is not only subtractive but also additive, owing to the addition of palladium and the removal of gold.
    • A Collection of Ten Schubert Songs Transcribed for the Piano

      Abramovic, Charles; Anderson, Christine L.; Wedeen, Harvey D.; Brodhead, Richard, 1947- (Temple University. Libraries, 2013)
      A Collection of Ten Schubert Songs Transcribed for the Piano Yoni Levyatov Doctor of Musical Arts Temple University, 2012 Doctoral Advisory Committee Chair: Dr. Charles Abramovic The objective of this project is the creation of a collection of ten songs by Franz Schubert, freely transcribed for solo piano: 1) "Gute Nacht" D.911 2) "An die Laute" D.905 3) "Memnon" D.541 4) "Willkommen und Abschied" D.767 5) "Greisengesang" D.778 6) "Todesmusik" D.758 7) "Der Goldschmiedsgesell" D.560 8) "Das Fischermädchen" D.957 9) "Das Lied im Grünen" D.917 10) "Der Strom" D.565 These are written bearing in mind the general history of piano transcription as originated (in the modern sense) by Franz Liszt, the greatest of all transcribers for the piano, as well as other figures such as Ferruccio Busoni and Leopold Godowsky. The primary reason behind Liszt's idea of transcribing several Schubert song cycles, as well as individual lieder, was to popularize those works and thus make them available to the common 19th-century amateur, who was quite at home at the piano and usually wished to be able to reproduce favorite pieces with his own two hands, or in a four-hand collaboration. Liszt was also interested in raising awareness of these works, which were less popular and treated less seriously than they are today. At present such transcriptions may be done not only for the reasons described above, but also as a modern stylization, utilizing the many pianistic and compositional devices that have emerged since the 19th century. Some of the songs transcribed in my collection deviate considerably from the original accompaniment textures. Schubert, particularly in the strophic songs, tends to use consistent figures that hardly change with each repetition. The transcription medium allows an expansion of that, as well as the use of many colorful and expressive pianistic idioms to reflect the text and the different stages of plot development. 
A rather extreme example of transcription taken to the point of re-composition, which I discuss and use as a reference, is Schubert's Winterreise - Eine komponierte Interpretation (1993) by Hans Zender, a piece in which the composer explores the textual and musical possibilities of the score by orchestrating his interpretation of it for tenor and small orchestra. A similar treatment of Schubert, but for solo piano rather than orchestra, is one of the novelties of this project. The song transcribed with particular reference to Zender's work is "Gute Nacht," which opens both Zender's and Liszt's transcriptions, following Schubert's original. Another significant point of reference in this project is Liszt's transcription of "Das Fischermädchen" from Schwanengesang D.957. It is one of only two lieder in my collection that also exists in another transcription that has been part of the common piano repertoire. Liszt's treatment stands somewhere between what would have been a more common, literal transcription of the 1840s and Zender's psychological re-composition; my transcription pays homage to that. The implications of this project may bring wider recognition to the validity of the transcription genre and an expansion of the modern piano transcription repertoire, which is somewhat limited owing to its unpopularity among current composers.
    • A Combination Optical and Electrical Nerve Cuff for Rat Peripheral Nerve

      Spence, Andrew J.; Lemay, Michel A.; Patil, Chetan Appasaheb (Temple University. Libraries, 2019)
      Spinal cord injury results in life-long damage to sensory and motor functions. Recovery from these injuries is limited and often insufficient because the lack of stimulation from supraspinal systems results in further atrophy of the damaged neural pathways. Current studies have shown that repeated sensory activity obtained by applying stimulation enhances the plasticity of neural circuits and, in turn, increases the ability to create new pathways able to compensate for the damaged neurons. Functional electrical stimulation has shown success in this form of rehabilitation, but it has its limitations: stimulating neural pathways with electricity also stimulates surrounding neurons and muscle tissue, attenuating the intended effect. The use of optogenetics mitigates this issue, but comes with its own complications. Optogenetics is a growing method of neural stimulation that utilizes genetic modification to create light-activated ion channels in neurons, allowing activation or suppression of neural pathways. In order to activate the neurons, light of the appropriate wavelength must be able to reach the nerve. Applying the light transcutaneously is insufficient, as the skin and muscle tissue attenuate the signal, and the target nerve may move relative to an external point on the body, creating further inconsistency. In the specific case of a rat model, an external object will be immediately removed by the animal. This thesis seeks to address this issue for a rat model by designing a nerve cuff capable of both optical and electrical stimulation. The device will be scaled to fit the sciatic nerve of a rat and allow for both optical activation and inhibition of neural activity. It will be wired such that each stimulus may be delivered individually or in conjunction with the other; simultaneous stimulation is required in order to validate the neural inhibition facet. 
The circuit itself will be validated through the use of an optical stimulation rig, using a photoreceptor in place of an EMG. The application of the cuff will be verified in a live, naive rat. Aim 1: Design and build an implantable electrical stimulation nerve cuff for the sciatic nerve of rats. An electrical nerve cuff for the sciatic nerve of a rat will be designed and assembled such that it is able to reliably activate the H-reflex. For it to be used in a walking rat, the cuff must be compatible with a head mount to prevent the rat from chewing at the wiring or its exit point. The cuff will be controlled through a MATLAB program that outputs specified signals and compares these outputs directly with the resultant EMG inputs. Aim 2: Implement LEDs onto the cuff and perform validation experiments. Light delivery capability will be added to the cuff through the use of LEDs, and the functionality of the cuff will be validated through tests on naive rats. If successful, only an electrical stimulation will result in a muscle twitch. An optical stimulation should result in no twitches, which would validate that no current is leaking from the nerve cuff, given that the rat does not express any light-sensitive protein channels. Ultimately, with a rat expressing ChR2 opsins in the sciatic nerve, optical activation of the nerve with blue light at a wavelength of 470 nm should produce an H-wave without an M-wave. Similarly, using the nerve cuff with a rat expressing ArchT opsins should suppress the H-wave evoked by an electrical stimulation once the sciatic nerve is illuminated with green light at a wavelength of 520 nm.
    • A Comparative Analysis of Bayesian Nonparametric Variational Inference Algorithms for Speech Recognition

      Picone, Joseph; Obeid, Iyad, 1975-; Won, Chang-Hee, 1967-; Yates, Alexander; Sobel, Marc J. (Temple University. Libraries, 2013)
      Nonparametric Bayesian models have become increasingly popular in speech recognition tasks such as language and acoustic modeling due to their ability to discover underlying structure in an iterative manner. These methods do not require a priori assumptions about the structure of the data, such as the number of mixture components, and can learn this structure directly. Dirichlet process mixtures (DPMs) are a widely used nonparametric Bayesian method that can serve as a prior to determine an optimal number of mixture components and their respective weights in a Gaussian mixture model (GMM). Because DPMs potentially require an infinite number of parameters, inference algorithms are needed to make posterior calculations tractable. The focus of this work is an evaluation of three Bayesian variational inference algorithms that have only recently become computationally viable: Accelerated Variational Dirichlet Process Mixtures (AVDPM), Collapsed Variational Stick Breaking (CVSB), and Collapsed Dirichlet Priors (CDP). To eliminate other effects on performance, such as language models, a phoneme classification task is chosen to more clearly assess the viability of these algorithms for acoustic modeling. Evaluations were conducted on the CALLHOME English and Mandarin corpora, two languages that, from a human perspective, are phonologically very different. It is shown in this work that these inference algorithms yield error rates comparable to a baseline GMM but with up to a factor of 20 fewer mixture components. AVDPM is shown to be the most attractive choice because it delivers the most compact models and is computationally efficient, enabling its application to big data problems.
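The component-pruning behavior that a DP prior provides can be illustrated with scikit-learn's BayesianGaussianMixture, which implements a truncated variational Dirichlet-process GMM. This is a generic sketch under illustrative data and parameter settings, not the AVDPM, CVSB, or CDP algorithms evaluated in the dissertation:

```python
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

rng = np.random.default_rng(0)
# Synthetic data: three well-separated 1-D Gaussian clusters
X = np.concatenate([
    rng.normal(-5, 0.5, 300),
    rng.normal(0, 0.5, 300),
    rng.normal(5, 0.5, 300),
]).reshape(-1, 1)

# Truncated DP prior: a cap of 10 components, but the variational
# stick-breaking weights drive unneeded components toward zero.
dpgmm = BayesianGaussianMixture(
    n_components=10,
    weight_concentration_prior_type="dirichlet_process",
    weight_concentration_prior=0.1,
    random_state=0,
    max_iter=500,
).fit(X)

# Count components that actually carry weight
effective = int(np.sum(dpgmm.weights_ > 0.05))
print(effective)
```

With well-separated clusters, only about three of the ten allowed components retain non-negligible weight, mirroring the model-compaction effect the abstract reports (baseline-comparable accuracy with far fewer mixture components).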

      Shapiro, Joan Poliner; Gross, Steven Jay; DuCette, Joseph P. (Temple University. Libraries, 2014)
      Charter school expansion is at the forefront of educational reform. There is currently little research on what issues charter school organizations face when they expand, how specific organizational structures are implemented during a charter school expansion process, and which structures provide a favorable outcome of the expansion. The overall goal of this study was an in-depth analysis of two expanding charter schools. This qualitative two-site case study examined several select issues that charter schools face during expansion, with the goal of identifying differences in approach and evaluating outcomes of the expansion in the light of these differences. Two urban charter school organizations within the same city were chosen for this case study. The following four specific research questions were addressed: 1) What issues did the selected charter school organizations face when they were expanding? 2) What type of organizational system did the charter schools have, and how did that system facilitate their expansion? 3) How was information communicated during the charter school organizations' expansion? 4) How did the selected charter school organizations handle heightened turbulence during the expansion period? The primary sources were: 1) data obtained through interviews with three school administrators within each organization; and 2) data collected via questionnaires in order to determine administrators' approaches to decision making, strategic plans, and communication flow within each organization. The data were analyzed, and the research reflects an in-depth analysis of the varying levels of turbulence experienced by each charter school organization, including factors and decisions that impacted each organization's expansion process. 
The findings indicate that there are a variety of internal factors and external obstacles that charter school organizations must consider and ultimately overcome before and during an organizational expansion. The results suggest that each organization experienced varying levels of turbulence when expanding due to a multitude of factors, including relationships with stakeholders, community support, school performance, and the availability of resources such as students, facilities, finances, and staff. The levels of turbulence experienced by the two charter school organizations were quite different, given the variety of factors that impacted each organization's expansion, and there were only a few areas in which the two organizations experienced similar levels of turbulence. These findings indicate that while each charter school organization may at times have faced different levels of turbulence, given a variety of internal and external factors, these varying levels of turbulence did not appear to prevent either organization from expanding. Furthermore, the degree of turbulence experienced by different individuals within each charter school organization, based upon their positionality, was influenced by a multitude of factors, both controllable and uncontrollable. The factors that impact the level of turbulence experienced by each organization include the organizational structure, stakeholder involvement, and the flow of communication. The benefit of this study is to better understand the variety of factors, both internal and external, that influence and contribute to a charter school expansion and the varying degrees of turbulence experienced by all stakeholders involved in a charter school while the organization is expanding. 
The results of this study provide insight regarding varying factors charter school organizations should consider when expanding and how decisions are made and communicated to all stakeholders while simultaneously considering the impact these decisions have on all individuals.

      Houck, Nöel, 1942-; Beglar, David J.; Childs, Marshall; Tatsuki, Donna Hurst; Ishihara, Noriko (Temple University. Libraries, 2010)
      A small but important set of studies on complaint speech acts has focused on certain aspects of native speaker (NS) and non-native speaker (NNS) complaints, such as strategy use and native speaker judgment (Du, 1995; House & Kasper, 1981; Morrow, 1995; Murphy & Neu, 1996; Olshtain & Weinbach, 1987; Trosborg, 1995). However, few researchers have comprehensively researched complaint interactions. Complaining to the person responsible for the complainable (as opposed to complaining about a third party or situation) is a particularly face-threatening speech act, with social norms that vary from culture to culture. This study was an investigation of how Japanese and Americans express their dissatisfaction to those who caused it, in their native language and in the target language (Japanese or English). The data analyzed are from the role-play performances of four situations by ten dyads in each of four groups: native speakers of Japanese speaking Japanese to a Japanese (JJJ), native speakers of English speaking English to an American (EEE), native speakers of Japanese speaking English to a native speaker of English (JEE), and native speakers of English speaking Japanese to a native speaker of Japanese (EJJ). The complaint categories used in this study represent a pared-down version of Trosborg's (1995) categories based on two criteria: (a) hinting at or mentioning the complainable and (b) negative assessment of the complainee's action or of the complainee as a person. The following characteristics of the complaint interactions were analyzed: (a) the length of interactions in terms of the number of turns, (b) complaint strategies used by complainers, (c) initial complaint strategies used by complainers, (d) the comparison of S1Hint and S2Cmpl as the initial position, (e) interaction flow in terms of complaint severity levels, (f) strategies employed by complainees, and (g) the flow of complaint interactions between complainers and complainees. 
The results indicate some differences between the groups of native speakers of English and Japanese in the length of their interactions and the use of strategies by complainers and complainees. In general, complaint sequences in English were shorter, and the complaint strategies used by the JJJ group were less indirect than those used by the EEE group. Several prototypical complaint sequences are described. Concerning the use of strategies, the JEE and EJJ groups used strategies more in line with those employed by target language speakers, rather than by speakers of their own language. An attempt is made to account for the different characteristics of English and Japanese complaints in terms of linguistic resources. Pedagogical implications are also highlighted.
    • A Comparative Study of How Career Services Staff Responds To Students' Employment Search

      Davis, James Earl, 1960-; Caldwell, Corrinne A.; DuCette, Joseph P.; Ikpa, Vivian W. (Temple University. Libraries, 2014)
      Every year thousands of college graduates seek employment. In preparing for a career, many students turn to the Office of Career Services for assistance, since it is a resource that they can use in their job searches as they navigate an increasingly tight job market. Despite the obvious importance of career services in higher education, not enough is known about how these offices work and how they utilize the various resources available to them in assisting graduates to find employment. The core purpose of the present study is to fill this gap in the literature. This qualitative case study compared the activities of the Office of Career Services at two institutions of higher education (St. Peter and St. Thomas are the names used throughout this dissertation). While both institutions are Jesuit, they differ in a number of ways that allowed meaningful comparisons of how the staff members in the Office of Career Services responded to the needs of undergraduate students in their employment searches. Data were collected through open-ended, semi-structured interviews with key staff members at both institutions. The interviews focused on how staff members provide services to their students and alumni as well as to the employers of these alumni. The study attempted to understand the formal and informal processes used by the Office of Career Services at these two universities as a measure of the institutions' organizational culture (Tierney, 1988). In addition, the study examined how the staff of the Office of Career Services develop and maintain connections to the academic community and to local and national businesses. The results of the study indicate that the career services staff members at these two universities informed students early in their academic careers of the services afforded them in preparing for their job searches. Both offices are focused on their students but believe they are under-utilized by the students. St. Peter's has an advantage in employment opportunities for students due to its location, while St. Thomas has a stronger relationship with its institution's academic community. The implications of these results for career services in general are discussed.
    • A Comparison of Adaptive Behavior Skills and IQ in Three Populations: Children with Learning Disabilities, Mental Retardation, and Autism

      Fiorello, Catherine A.; DuCette, Joseph P.; Rosenfeld, Joseph G.; Rotheram-Fuller, Erin; Farley, Frank (Temple University. Libraries, 2009)
      Adaptive behavior skills are the conceptual, social, and practical skills that individuals learn in order to function in their everyday lives (AAIDD, 2008). Measuring adaptive behavior is a way to summarize the effectiveness with which individuals meet the standards of personal independence and social responsibility expected for their age and cultural group. This paper discusses the history and development of adaptive behavior as a construct, its measurement, and its relationship to intelligence. Previous research has examined the relationship between adaptive and intellectual functioning; this study investigates adaptive performance among children with disabilities while controlling for the influence of intellectual level. Children with autism, specific learning disabilities, and mental retardation were studied to determine how they fared in the adaptive subdomains of communication, socialization, and activities of daily living. Data for the study were gathered by reviewing archives from special education records in a large, urban school district. Results indicated a positive and moderate relationship between intelligence and adaptive behavior, but only in the autism group. The groups differed in their performance on the subdomains of adaptive behavior, and the pattern of adaptive skills for each diagnostic group was unique. Children with autism were found to have deficits in socialization, children with learning disabilities were found to have deficits in communication, and children with mental retardation showed deficits in all domains. These patterns held up even when IQ was controlled; however, the groups no longer differed on communication skills, suggesting that IQ is most strongly related to communication. Finally, the study revealed that full scale IQ, activities of daily living, and communication skills discriminate mental retardation from the other groups, while socialization skills discriminate autism from the other groups. 
Implications of these findings are discussed relative to assessment practices, differential diagnosis, program development, and progress monitoring.
    • A Comparison of Clustering Algorithms for the Study of Antibody Loop Structures

      Obradovic, Zoran; Dragut, Eduard Constantin; Vucetic, Slobodan; Zeng, Qiang; Dunbrack, Roland L. (Temple University. Libraries, 2017)
      Antibodies are the fundamental agents of the immune system. The complementarity-determining regions (CDRs) act as the functional surfaces that bind antibodies to their targets. These CDR structures, which are peptide loops, are diverse in both amino acid sequence and structure. In 2011, we surveyed a database of CDR loop structures using the affinity propagation clustering algorithm of Frey and Dueck. With the growth of the number of structures deposited in the Protein Data Bank, the number of antibody CDRs has approximately tripled. In addition, although the 2011 affinity propagation clustering was successful in many ways, the methods used left too much noise in the data, and the algorithm tended to clump diverse structures together. This work revisits the antibody CDR clustering problem and uses five different clustering algorithms to categorize the data. Three of the algorithms use DBSCAN but differ in the data comparison functions used: one uses the sum of the dihedral distances, another uses the supremum of the dihedral distances, and the third uses the Jarvis-Patrick shared nearest neighbor similarity, where the nearest neighbor lists are compiled using the sum of the dihedral distances. The other two clustering methods use the k-medoids algorithm, one of which has been modified to incorporate pairwise constraints. Overall, DBSCAN with the sum of the dihedral distances and with the supremum of the dihedral distances produced the best clustering results as measured by the average silhouette coefficient, while the constrained k-medoids algorithm had the worst clustering results overall.
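The DBSCAN-with-dihedral-distance approach can be sketched with scikit-learn's DBSCAN run on a precomputed distance matrix. The toy "loops" below are hypothetical two-angle vectors, not real CDR data, and the sum-of-circular-distances function is a minimal stand-in for the dissertation's comparison functions:

```python
import numpy as np
from sklearn.cluster import DBSCAN

def dihedral_dist(a, b):
    """Sum of per-angle circular distances (degrees) between two loops
    represented as vectors of dihedral angles, respecting periodicity."""
    d = np.abs(np.asarray(a) - np.asarray(b))
    return float(np.sum(np.minimum(d, 360.0 - d)))

# Toy loops: two tight groups of (phi, psi) pairs plus one outlier
loops = np.array([
    [-60.0, -45.0], [-62.0, -47.0], [-58.0, -44.0],   # group A
    [120.0, 130.0], [122.0, 128.0], [118.0, 132.0],   # group B
    [0.0, 179.0],                                     # noise point
])

# Precompute the pairwise distance matrix
n = len(loops)
D = np.zeros((n, n))
for i in range(n):
    for j in range(n):
        D[i, j] = dihedral_dist(loops[i], loops[j])

# DBSCAN labels dense groups 0, 1, ...; isolated points get -1
labels = DBSCAN(eps=10.0, min_samples=2, metric="precomputed").fit_predict(D)
print(labels)
```

The two tight groups come back as separate clusters and the lone point is flagged as noise, which is the property (noise rejection) that motivated moving away from affinity propagation.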
    • A comparison of language sample elicitation methods for dual language learners

      Reilly, Jamie; Reich, Jodi; García, Felicidad M. (Temple University. Libraries, 2017)
      Language sample analysis has come to be considered the “gold standard” approach for cross-cultural language assessment, and speech-language pathologists (SLPs) assessing individuals of multicultural or multilingual backgrounds have been recommended to utilize this approach in these evaluations (e.g., Pearson, Jackson, & Wu, 2014; Heilmann & Westerveld, 2013). Language samples can be elicited with a variety of different tasks, and selection of a specific method by SLPs is often a major part of the assessment process. The present study aims to facilitate the selection of sample elicitation methods by identifying the method that elicits a maximal performance of language abilities and variation in children’s oral language samples. Analyses were performed on Play, Tell, and Retell methods across 178 total samples. Retell elicited higher measures of syntactic complexity (i.e., TTR, SI, MLUw) than Play, as well as a higher TTR (i.e., lexical diversity) and SI (i.e., clausal density) than Tell; however, no difference was found between Tell and Retell for MLUw (i.e., syntactic complexity/productivity), nor between Tell and Play for TTR. Additionally, the two narrative methods elicited higher DDM (i.e., frequency of dialectal variation) than the Play method, with no significant difference between Tell and Retell. Implications for the continued use of language sample analysis in the assessment of speech and language are discussed.
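Two of the measures compared above, TTR and MLUw, reduce to simple token arithmetic over a transcript. A minimal sketch with a made-up three-utterance sample (not the study's data or transcription tooling):

```python
# Toy language sample: each string is one child utterance.
sample = [
    "the dog ran home",
    "he saw a big dog",
    "the dog barked",
]

# Flatten the sample into word tokens
tokens = [w for utt in sample for w in utt.lower().split()]

ttr = len(set(tokens)) / len(tokens)   # type-token ratio (lexical diversity)
mluw = len(tokens) / len(sample)       # mean length of utterance in words
print(round(ttr, 3), round(mluw, 3))
```

Here 12 tokens span 9 distinct word types, so TTR = 0.75 and MLUw = 4.0; real analyses would add morpheme counting, mazes/fillers handling, and clause segmentation for SI.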
    • A Comparison of Teachers' and School Psychologists' Perceptions of the Cognitive Abilities Underlying Basic Academic Tasks

      Fiorello, Catherine A.; Thurman, S. Kenneth; DuCette, Joseph P.; Rosenfeld, Joseph G.; Farley, Frank (Temple University. Libraries, 2008)
      The Cattell-Horn-Carroll (CHC) theory of cognitive functioning is a well-validated framework for intelligence, and cross-battery assessment is a means of utilizing CHC theory in practice. School psychologists write recommendations with the assumption that teachers understand the cognitive abilities underlying basic academic tasks in the same way they do. Theoretically, the more similar the understanding of these two groups, the greater the likelihood of appropriate referrals and intervention fidelity, since teacher perceptions of their students' cognitive abilities impact the referrals that they make and the intervention strategies that they implement. In this study, teachers and school psychologists were asked to sort basic academic tasks into the CHC broad abilities. The central research questions are as follows: Are school psychologists and teachers equally proficient at identifying the broad cognitive ability demands of a basic academic task? How do the responses of the participants compare to the theoretical model presented? Do teachers and school psychologists become better at identifying the cognitive demands of a task with experience or higher levels of training? To answer the first research question, MANOVAs were performed. There was a significant overall difference between groups in their responses, and teachers and school psychologists differed significantly on five of the eight CHC broad ability scales; however, school psychologists were only significantly better at consistently identifying the basic academic tasks that utilized Fluid Reasoning. To answer the second research question, a principal components factor analysis was performed. The factors created displayed limited similarity to the theoretical factors. Pearson correlations between the theoretical factors and the factors created through factor analysis revealed multiple positive correlations that accounted for more than 10% of the variance. The theoretical scales that were most strongly correlated were Fluid Reasoning, Auditory Processing, and Processing Speed. To answer the third research question, Pearson correlations were calculated. This analysis revealed that neither group develops a better understanding of the cognitive abilities required to perform academic tasks with experience. Level of education is not related to accuracy for teachers on any of the items; for school psychologists, level of education is significantly correlated with accuracy in identifying tasks that require Visual Processing.

      Gross, Steven Jay; Ikpa, Vivian W.; DuCette, Joseph P.; Davis, James Earl, 1960-; Stahler, Gerald (Temple University. Libraries, 2010)
      With a focus on leadership, this study examines the leadership characteristics of principals in schools that are recognized as National Blue Ribbon Schools by the United States Department of Education. This mixed methodology study utilizes the causal comparative method to compare what teachers consider to be effective leadership characteristics of principals in National Blue Ribbon Schools to those of principals in matched sets of selected Non-Blue Ribbon Schools in Pennsylvania. The Audit of Principal Effectiveness is used to collect quantitative data, and a survey protocol is used to identify confounding factors and extraneous variables. The research revealed significant findings in nearly all areas of the Audit of Principal Effectiveness. Principals in the selected matched-set schools were ranked higher than principals in National Blue Ribbon Schools. Additional analysis using multiple regression showed that teachers perceive their principal as effective if the principal has good relations with them, employs and evaluates staff effectively, has high expectations, and does not excessively involve the community in the life of the school.

      Dames, Philip; Jacobs, Daniel A.; Peridier, Vallorie J. (Temple University. Libraries, 2019)
      Robotic technology is advancing out of the laboratory and into the everyday world. This world is less ordered than the laboratory and requires an increased ability to identify, target, and track objects of importance. The Bayes filter is the ideal algorithm for tracking a single target, and there exists a significant body of work detailing tractable approximations of it, with the notable examples of the Kalman and Extended Kalman filters. Multiple target tracking also relies on a similar principle, and the Kalman and Extended Kalman filters have multi-target implementations as well. Other methods include the PHD filter and the Multiple Hypothesis Tracker. One issue is that these methods were formulated to track only one classification of target. With the increased need for robust perception, there exists a need to develop a target tracking algorithm that is capable of identifying and tracking targets of multiple classifications. This thesis examines two of these methods: the Probability Hypothesis Density (PHD) filter and the Multiple Hypothesis Tracker (MHT). A MATLAB-based simulation of an office floor plan is developed, and a simulated UGV equipped with a camera is set the task of navigating the floor plan and identifying targets. Results of these experiments indicated that both methods are mathematically capable of achieving this. However, there was a significant reliance on post-processing to verify the performance of each algorithm and to filter out noisy sensor inputs, indicating that specific multi-target, multi-class implementations of each algorithm should be implemented with a detailed and more accurate sensor model.
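      As an illustrative aside, not drawn from the thesis itself: the Bayes filter specialized to linear-Gaussian motion and measurement models yields the Kalman filter mentioned above. A minimal one-dimensional sketch in Python (the models F, H and noise variances Q, R are assumed example values, not parameters from the work):

```python
import numpy as np

def kalman_step(x, P, z, F=1.0, H=1.0, Q=0.01, R=0.1):
    """One predict/update cycle of a 1-D Kalman filter.

    x, P : prior state estimate and its variance
    z    : new measurement
    F, H : state-transition and observation models (scalars here)
    Q, R : process and measurement noise variances (assumed values)
    """
    # Predict: propagate the state and grow its uncertainty
    x_pred = F * x
    P_pred = F * P * F + Q
    # Update: blend prediction and measurement via the Kalman gain
    K = P_pred * H / (H * P_pred * H + R)
    x_new = x_pred + K * (z - H * x_pred)
    P_new = (1.0 - K * H) * P_pred
    return x_new, P_new

# Track a stationary target at position 5.0 from noisy range readings
rng = np.random.default_rng(0)
x, P = 0.0, 1.0  # deliberately poor initial guess
for z in 5.0 + 0.3 * rng.standard_normal(50):
    x, P = kalman_step(x, P, z)
```

      The estimate converges toward the true position while the posterior variance P shrinks; the multi-target methods discussed in the thesis generalize this single-track recursion to sets of targets.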

      Rams, Thomas E.; Suzuki, Jon, 1947-; Whitaker, Eugene J. (Temple University. Libraries, 2013)
      Objectives: Systemic antibiotics are generally recognized as providing a beneficial impact in the treatment of both aggressive and chronic periodontitis. Since strains of periodontal pathogens among periodontitis patients may vary in their antibiotic drug resistance, the American Academy of Periodontology recommends antimicrobial susceptibility testing of suspected periodontal pathogens prior to administration of systemic periodontal antibiotic therapy, to reduce the risk of a treatment failure due to pathogen antibiotic resistance. E-test and MIC Test Strip assays are two in vitro antimicrobial susceptibility testing systems employing plastic-based and paper-based carriers, respectively, loaded with predefined antibiotic gradients covering 15 two-fold dilutions. To date, no performance evaluations have been carried out comparing the E-test and MIC Test Strip assays in their ability to assess the in vitro antimicrobial susceptibility of periodontal bacterial pathogens. As a result, the purpose of this study was to compare the in vitro performance of E-test and MIC Test Strip assays in assessing minimal inhibitory concentration (MIC) values of four antibiotics frequently utilized in systemic periodontal antibiotic therapy against 11 fresh clinical subgingival isolates of the putative periodontal pathogen, Prevotella intermedia/nigrescens, and to compare the distribution of P. intermedia/nigrescens strains identified with interpretative criteria as "susceptible" and "resistant" to each of the four antibiotics using MIC values determined by the two antimicrobial susceptibility testing methods. Methods: Standardized cell suspensions, equivalent to a 2.0 McFarland turbidity standard, were prepared with 11 fresh clinical isolates of P. intermedia/nigrescens, each recovered from the subgingival microbiota of United States chronic periodontitis subjects, and plated onto the surfaces of culture plates containing enriched Brucella blood agar.
After drying, pairs of antibiotic-impregnated, quantitative, gradient diffusion strips from two manufacturers (E-test, bioMérieux, Durham, NC, USA, and MIC Test Strip, Liofilchem s.r.l., Roseto degli Abruzzi, Italy) for amoxicillin, clindamycin, metronidazole, and doxycycline were placed apart from each other onto the inoculated enriched Brucella blood agar surfaces, so that an antibiotic test strip from each manufacturer was employed per plate against each P. intermedia/nigrescens clinical isolate for antibiotic susceptibility testing. After 48-72 hours of anaerobic jar incubation, individual MIC values for each antibiotic test strip against P. intermedia/nigrescens were read in μg/ml at the point where the edge of the bacterial inhibition ellipse intersected the antibiotic test strip. MIC50, MIC90, and MIC range were calculated and compared for each of the test antibiotics, with essential agreement (EA) values determined per test antibiotic for the level of outcome agreement between the two antimicrobial susceptibility testing methods. In addition, the identification of antibiotic "susceptible" and "resistant" strains among the P. intermedia/nigrescens clinical isolates was determined for each test antibiotic using MIC interpretative criteria from the standards developed by the European Committee on Antimicrobial Susceptibility Testing (EUCAST) for gram-negative anaerobic bacteria for the amoxicillin, clindamycin, and metronidazole findings, and from the French Society of Microbiology breakpoint values for anaerobic disk diffusion testing for the doxycycline data. Results: For amoxicillin, higher MIC50 and MIC90 values against the P. intermedia/nigrescens strains were found with the MIC Test Strip assay than with E-test strips, resulting in a relatively low EA value of 45.5% between the two susceptibility testing methods. A higher percentage of amoxicillin "resistant" P.
intermedia/nigrescens strains (72.7%) was identified by MIC Test Strips as compared to E-test strips (54.5%), although both methods found the same proportion of amoxicillin "susceptible" strains (27.3%). For clindamycin, both susceptibility testing methods provided identical MIC values (EA value = 100%) and exactly the same distributions of "susceptible" and "resistant" strains of P. intermedia/nigrescens. For metronidazole, only very poor agreement (EA value = 9.1%) was found between the two susceptibility testing methods, with MIC Test Strips exhibiting markedly higher MIC50 and MIC90 values against P. intermedia/nigrescens as compared to E-test strips. However, the distributions of "susceptible" and "resistant" P. intermedia/nigrescens strains were identical between the two susceptibility testing methods. For doxycycline, relatively good agreement (EA value = 72.7%) was found in MIC values between the two susceptibility testing methods, although generally lower MIC values were associated with MIC Test Strips. In addition, identical distributions of "susceptible" and "resistant" P. intermedia/nigrescens strains were provided by both susceptibility testing methods. Conclusions: Relative to MIC values measured against periodontal strains of P. intermedia/nigrescens, MIC Test Strips gave higher MIC values with amoxicillin and metronidazole, equal MIC values with clindamycin, and lower MIC values with doxycycline, as compared to MIC values measured with the E-test assay. Relative to the identification of antibiotic "susceptible" periodontal P. intermedia/nigrescens strains, both susceptibility testing methods provided identical findings, suggesting that the two methods are interchangeable for clinical decision making in regard to identification of antibiotic-sensitive strains of periodontal P. intermedia/nigrescens.
However, for epidemiologic surveillance of drug susceptibility trends, where exact MIC values are important to track over time, the relatively high proportion of non-exact MIC differences between the two susceptibility testing methods argues against using them interchangeably. Instead, one or the other method should be used consistently for such studies. Further comparative studies of the E-test and MIC Test Strip assays are indicated, using other periodontopathic bacterial species besides P. intermedia/nigrescens and assessing the reproducibility of MIC values provided by both in vitro susceptibility testing methods over time.
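As a purely illustrative sketch, not data from the study above: the summary statistics it relies on can be computed as follows, assuming the common convention that essential agreement counts MIC pairs within one two-fold dilution of each other (the function names and the paired MIC readings below are hypothetical):

```python
import math

def mic50_mic90(mics):
    """MIC50/MIC90: the lowest concentrations inhibiting 50% and 90%
    of isolates, i.e. the values at those ranks of the sorted MICs."""
    s = sorted(mics)
    n = len(s)
    return s[math.ceil(0.5 * n) - 1], s[math.ceil(0.9 * n) - 1]

def essential_agreement(mics_a, mics_b):
    """Percent of isolates whose MICs by two methods differ by no more
    than one two-fold dilution (a common convention for EA)."""
    within = sum(abs(math.log2(a) - math.log2(b)) <= 1.0
                 for a, b in zip(mics_a, mics_b))
    return 100.0 * within / len(mics_a)

# Hypothetical paired MIC readings (μg/ml) for 11 isolates
etest = [0.5, 0.5, 1, 1, 2, 2, 4, 4, 8, 16, 32]
strip = [0.5, 1,   1, 2, 2, 4, 4, 8, 8, 32, 128]
```

Here `mic50_mic90(etest)` yields (2, 16), and the last pair (32 vs. 128) differs by two dilutions, so 10 of 11 pairs agree and the EA value is about 90.9%.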

      Folio, Cynthia; Latham, Edward David; Brunner, Matthew G. P.; Wright, Maurice, 1949- (Temple University. Libraries, 2017)
      Orchestral excerpts have become one of the most important components, if not the most important component, of classical trumpet education in the last 50 years. This monograph discusses how trumpet orchestral excerpts grew in importance and how the demand for them catalyzed a rush of publications. As the number of trumpet players grew far faster than the supply of orchestral jobs, the mindset toward perfect performances of these excerpts began to narrow the focus of learning to an emphasis on technical proficiency. Context and an understanding of how the trumpet part relates to the other instruments in the orchestra have been relegated to lower priority. This monograph aims to restore a holistic and comprehensive approach to learning with an in-depth analysis of harmony, compositional techniques, and historical and musical contexts.

      Metz, Andreas; Constantinou, Martha; Surrow, Bernd; Cichy, Krzysztof (Temple University. Libraries, 2021)
      It has been known since the 1930s that protons and neutrons, collectively called nucleons, are not “point-like” elementary particles, but rather have a substructure. Today, we know from Quantum Chromodynamics (QCD) that nucleons are made from quarks and gluons, with gluons being the elementary force carriers for strong interactions. Quarks and gluons are collectively called partons. The substructure of the nucleons can be described in terms of parton correlation functions such as Form Factors, (1D) Parton Distribution Functions (PDFs), and their 3D generalizations in terms of Transverse Momentum-dependent parton Distributions (TMDs) and Generalized Parton Distributions (GPDs). All these functions can be derived from the even more general Generalized Transverse Momentum-dependent Distributions (GTMDs). This dissertation provides insight into all these functions from the point of view of their accessibility in experiments, from model calculations, and from their direct calculation within lattice formulations of QCD. In the first part of this dissertation, we identify physical processes to access GTMDs. By considering the exclusive double Drell-Yan process, we demonstrate, for the very first time, that quark GTMDs can be measured. We also show that exclusive double-quarkonium production in nucleon-nucleon collisions is a direct probe of gluon GTMDs. In the second part of this dissertation, we shift our focus to “parton quasi-distributions”. Over the last few decades, lattice QCD extraction of the full x-dependence of the parton distributions was prohibited by the explicit time-dependence of the correlation functions. In 2013, X. Ji made a path-breaking proposal to instead calculate parton quasi-distributions (quasi-PDFs). The procedure of “matching” is a crucial ingredient in the lattice QCD extraction of parton distributions from the quasi-PDF approach.
We address the matching for the twist-3 PDFs gT(x), e(x), and hL(x) for the very first time. We pay special attention to the challenges involved in the calculations due to the presence of singular zero-mode contributions. We also present the first-ever lattice QCD results for gT(x) and hL(x), and we discuss the impact of these results on the phenomenology. Next, we explore the general features of quasi-GPDs and quasi-PDFs in diquark spectator models. Furthermore, we address the Burkhardt-Cottingham-type sum rules for the relevant light-cone PDFs and quasi-PDFs in a model-independent manner and also check them explicitly in perturbative model calculations. The last part of this dissertation focuses on the first-ever extraction of the g1T(x, k⊥²) TMD from experimental data using Monte Carlo techniques. This dissertation therefore unravels different aspects of the distribution functions from varied perspectives.
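For orientation only (these are standard, schematic relations from the GTMD literature, not results of the dissertation): writing W(x, k⊥, Δ) for a GTMD depending on the parton momentum fraction x, transverse momentum k⊥, and momentum transfer Δ, the forward limit Δ → 0 yields a TMD, while integrating over the transverse momentum yields a GPD:

```latex
\begin{align}
  % Forward limit (vanishing momentum transfer) reduces a GTMD to a TMD
  f(x,\vec{k}_\perp) &= W(x,\vec{k}_\perp,\Delta)\big|_{\Delta=0}, \\
  % Integrating over transverse momentum yields a GPD in x, skewness \xi, and t
  F(x,\xi,t) &= \int \mathrm{d}^2\vec{k}_\perp \, W(x,\vec{k}_\perp,\Delta).
\end{align}
```

This hierarchy is the sense in which GTMDs are the “mother distributions” from which the TMDs and GPDs discussed above descend.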
    • A Conversation With Dance History: Movement and Meaning in the Cultural Body

      Welsh-Asante, Kariamu; Gordon, Lewis R. (Lewis Ricardo), 1962-; Meglin, Joellen A.; Weightman, Lindsay (Temple University. Libraries, 2008)
      This study regards the problem of a binary in dance discursive practices, seen in how "world dance" is separated from European concert dance. A close look at 1930s Kenya Luo women's dance in the context of "dance history" raises questions about which dances matter, who counts as a dancer, and how dance is defined. When discursive practices are considered in light of multicultural demographic trends and globalisation, the problem points toward a crisis of reason in western discourse about how historical origins and "the body" have been theorised. Within a western philosophical tradition the body and experience are negated as a basis for theorising. Historical models and theories about race and gender often reflect binary thinking whereby the body is theorised as text. An alternative theoretical model is established wherein dancers' processes of embodying historical meaning provide one of five bases through which to theorise. The central research questions this study poses and attempts to answer are: how can I illuminate a view of dance that is transhistorical and transnational? How can I write about 1930s Luo women in a way that does not create a case study existing outside of dance history? Research methods challenge historical materialist frameworks for discussions of the body and suggest insight can be gained into how historical narratives operate with coercive power--both in past and present--by examining how meaning is conceptualised and experienced. The problem is situated inside a hermeneutic circle that connects past and present discourses, so tensions are explored between a binary model of past/present and new ways of thinking about dance and history through embodiment. Archives, elder interviews, and oral histories are a means to approach 1930s Luo Kenya.
A choreography model is another method of inquiry where meanings about history and dance that subvert categories and binary assumptions are understood and experienced by dancers through somatic processes. A reflective narrative provides the means to untangle influences of disciplines like dance and history on the phenomenon of personal understanding.
    • A Costume Design for Pudd'nhead Wilson

      Chiment, Marie Anne (Temple University. Libraries, 2012)
      This thesis describes the process I used to research, draw, paint, and finally design the costumes for Temple University's theatrical production of Pudd'nhead Wilson, written by Charles Smith. I present the differences between the novel by Mark Twain and the script by Charles Smith. The early design process is then described, including meetings with the director Doug Wager, the scenic designer Ian P. Guzzone, and the lighting designer Christopher Hetherington. My preliminary costume research is illustrated in the text and the accompanying appendices, followed by full color gouache renderings. I discuss the creation of the costumes from first fittings through final dress rehearsal and the challenges that were overcome. A separate chapter, including costume and make-up design, is dedicated to the controversial character of The Minstrel. Finally, the conclusion contains images enabling the reader to visualize the design of each character from preliminary collage through rendering to a final production photo. These are combined with my thoughts and reflections on the final designs.
    • A Critical Afrocentric Reading of the Artist's Responsibility in the Creative Process

      Asante, Molefi Kete, 1942-; Johnson, Amari; Nehusi, Kimani S. K.; Williams-Witherspoon, Kimmika (Temple University. Libraries, 2020)
      This study explores creative expression as a form and function of activism, self-determination, self-actualization, community transformation, and cultural resilience/survival. Initiating this probe into the vast topic, the study begins with the following set of research questions: What is the highest responsibility of African artists? Is it to the work of art itself—to pursue an object perceived as an island of form and symbol, with little or no reference to other life experiences, that lends itself to urgent, relevant social interpretation? Is it to identify and promote one’s self as an individual seeking self-glorification and/or commendation, to prove humanity and/or worthiness to others? Or is it to intensify the advancement toward the total liberation of all African people? This decidedly theoretical endeavor primarily concerns itself with African creative expressions (literary creations, cultural performance, visual and musical expressions) within the constructed boundaries of the United States of America. It includes not only a historical overview of the earliest extant Black cultural creations, but also an evaluation of the socio-historical and political context in which African artists—with distinctive attention to musicians and visual artists—flourished within the nineteenth and twentieth centuries, including those contemporary artists who continue to thrive in the twenty-first century. Among other issues, this treatise specifically ponders, relative to the moral and ethical obligation of African artists, the challenge African creatives face in making political and creative expressions synonymous.

      Wagner, Elvis; Sniad, Tamara; Kanno, Yasuko, 1965-; Davis, James Earl, 1960- (Temple University. Libraries, 2019)
      As contemporary federal education legislation requires schools to ensure that all students are prepared for college and careers upon graduation, the college and career readiness of English learners (ELs) is an urgent matter requiring investigation. Within this policy context, career and technical education (CTE) has been presented as a potential pathway for ELs to achieve college and career readiness. This necessitates research examining ELs’ opportunities to participate in CTE programs as an alternative to traditional secondary schools. Thus, the purpose of this dissertation is (a) to examine the processes required to access CTE programs and the barriers ELs face when attempting to enroll in CTE, (b) to understand how institutional culture and the distribution of resources support ELs and instructors with ELs in their courses, and (c) to investigate ELs’ classroom experiences and opportunities to learn, as understood by the students, teachers, and administrators in a school dedicated to CTE programming. Drawing on ethnographic methodology, data were collected through fieldwork and classroom observations documented as fieldnotes; 36 in-depth interviews with teachers, administrators, ELs, and former ELs; artifacts from classrooms; policy documents; student academic records; and state-level data from the Department of Education. The data analysis demonstrated that, overall, ELs did not experience equitable access to educational experiences leading to college and career readiness. First, ELs’ access to CTE programs that aligned with their career aspirations was restricted; administrators and counselors justified this practice through discourses of meritocracy and deficit framing of ELs. Second, despite the fact that ELs and instructors complained about the lack of support and resources, administrators drew upon race- and language-neutral ideologies to rationalize their failure to invest in programs and practices that would ensure equitable access and success for ELs.
Finally, within this context of limited support, instructors expressed deficit views of ELs and relied on pedagogies that did not accommodate the linguistic needs of ELs. As a result, ELs believed that they did not receive adequate support, and many felt unprepared for college and careers. Interpreted through a critical race theory lens, these findings suggest that CTE functions as a White educational space, operating under tacit White supremacist ideologies to justify inequitable treatment of ELs and privilege the cultural and linguistic practices of White students. This undermines CTE’s potential to provide equitable access to college and career readiness for ELs.