Wengryniuk, Sarah E.; Dobereiner, Graham; Wang, Rongsheng; Watson, Mary P. (Temple University. Libraries, 2021)
      The development of new strategies and associated reagents that enable previously inaccessible synthetic disconnections has contributed largely to the remarkable progress in exploring new chemical space for drug discovery and innovative complex molecule syntheses. In the Wengryniuk laboratory, we are devoted to discovering new synthetic methodologies based on umpolung, or reverse polarity, strategies, enabled by nitrogen-ligated (bis)cationic hypervalent iodine reagents (N-HVIs). I(III) N-HVIs represent an attractive new class of oxidant as they are environmentally benign, highly tunable, and have been shown to enable distinct modes of reactivity. This dissertation focuses on demonstrating the synthetic utility of these N-HVI reagents toward C–O bond formation via a reverse polarity approach. In Chapter 1, a summary of the reactivity and characteristics of hypervalent iodine reagents is provided. Chapter 2 describes a mild and metal-free strategy for alcohol oxidation mediated by I(III) N-HVI reagents. This work represents the first method for chemoselective oxidation of equatorial over axial alcohols and the first in situ synthesis and application of N-HVIs in a simple one-pot procedure. Chapter 3 discusses a novel strategy for dual C–H functionalization to access functionalized chroman scaffolds via an umpolung oxygen activation cyclization cascade. Computational studies in collaboration with Prof. Dean Tantillo (UC-Davis), along with experimental probes in our laboratory, support the formation of an umpoled oxygen intermediate as well as competitive direct and spirocyclization pathways for the key C–O bond forming event. The utility of the developed method is demonstrated through downstream derivatization of the iodonium salt moiety to access C–H, C–X, and C–C substitution via established Pd-catalyzed cross couplings.
The total synthesis of the natural product (±)-conicol was completed in 8 steps and 23% overall yield, further demonstrating the synthetic utility of the developed method. Key synthetic steps include smooth construction of the chroman core via N-HVI-mediated C–H etherification of a pendant alcohol, followed by late-stage double bond installation. Overall, this dissertation summarizes the current state of research enabled by N-HVI reagents, with a focus on their utility in reverse polarity heteroatom activation strategies, and it serves as a practical guide for future development in the field.
    • Novel Word Learning as a Treatment of Word Processing Disorders in Aphasia

      Martin, Nadine, 1952-; DeDe, Gayle; Kohen, Francine (Temple University. Libraries, 2018)
      Research suggests that novel word learning tasks engage both verbal short-term memory (STM) and lexical processing, and may serve as a potential treatment for word processing and functional language in aphasia (e.g., Gupta, Martin, Abbs, Schwartz, & Lipinski, 2006; Tuomiranta, Grönroos, Martin, & Laine, 2014). The purpose of this study was to gain support for the hypotheses that novel word learning engages verbal STM and lexical access processes and can be used to promote improvements in these abilities in the treatment of aphasia. We used a novel word learning task as a treatment with three participants (KT, UP, and CN) presenting with different types and severities of aphasia, and predicted that treatment would result in (1) acquisition of trained novel words, (2) improved verbal STM capacity, and (3) improved access to and retrieval of real words. Twenty novel words were trained in 1-hour sessions, 2 days per week, for 4 weeks. Language and learning measures were administered pre- and post-treatment. All three participants showed receptive learning and some improvement on span tasks, while UP and CN demonstrated some expressive learning. KT also improved in performance on the Peabody Picture Vocabulary Test and the Philadelphia Naming Test. UP showed significant improvement in proportion of Correct Information Units (CIUs) in discourse. CN showed minimal improvement in narrative production for proportion of CIUs and proportion of closed class words. These findings indicate that novel word learning treatment, which engages verbal STM processes and lexical retrieval pathways, can improve input lexical processing. Theoretically, this study provides further evidence for models that propose common mechanisms supporting novel word learning, short-term memory, and lexical processing.

      Yang, Weidong, Dr.; Sheffield, Joel B.; Tanaka, Jacqueline; Guo, Wei (Temple University. Libraries, 2018)
      The nucleus of eukaryotic cells is a vitally important organelle that sequesters the genetic information of the cell and protects it with the help of two highly evolved structures, the nuclear envelope (NE) and nuclear pore complexes (NPCs). Together, these two structures form a barrier that mediates the bidirectional trafficking of molecules between the nucleus and the cytoplasm. NE transmembrane proteins (NETs) embedded in either the outer nuclear membrane (ONM) or the inner nuclear membrane (INM) play crucial roles in both nuclear structure and function, including genome architecture, epigenetics, transcription, splicing, DNA replication, and nuclear organization and positioning. Furthermore, numerous human diseases are associated with mutations and mislocalization of NETs on the NE. Many fundamental questions about NETs remain unresolved; we focused on two major ones: first, the localization and transport rates of NETs, and second, the transport route taken by NETs to reach the INM. Since NETs are involved in many of the mechanisms used to maintain cellular homeostasis, it is important to quantitatively determine the spatial locations of NETs along the NE to fully understand their role in these vital processes. However, available approaches for this task are limited, and moreover, these methods provide no information about the translocation rates of NETs between the two membranes. Furthermore, while the trafficking of soluble proteins between the cytoplasm and the nucleus has been well studied over the years, the path taken by NETs into the nucleus remains in dispute. At least four distinct models have been proposed for how transmembrane proteins destined for the INM cross the NE through NPC-dependent or NPC-independent mechanisms, based on specific features found on the soluble domains of INM proteins. 
To resolve these two major questions, it is necessary to employ techniques capable of observing these dynamics at the nanoscale. Current experimental techniques are unable to break the temporal and spatial resolution barriers required to study these phenomena. Therefore, we developed and modified single-molecule techniques to answer these questions. First, to study the distribution of NETs on the NE, we developed a new single-molecule microscopy method called single-point single-molecule fluorescence recovery after photobleaching (smFRAP), which provides spatial resolution of <10 nm and, furthermore, previously unattainable information about NET translocation rates from the ONM to the INM. Second, to examine the transport route used by NETs destined for the INM, we used a single-molecule microscopy technique previously developed in our lab called single-point edge-excitation sub-diffraction (SPEED) microscopy, which provides spatiotemporal resolution of <10 nm precision and 0.4 ms detection time. The major findings from my doctoral research can be classified into two categories: (i) technical developments to study NETs in vivo, and (ii) biological findings from employing these microscopy techniques. With regard to technical contributions, we created and validated a new single-molecule microscopy method, smFRAP, to accurately determine the localization and distribution ratios of NETs on both the ONM and INM in live cells. Second, we adapted SPEED microscopy to study transmembrane protein translocation in vivo. My work has also contributed four main biological findings to the field: first, we determined the in vivo translocation rates of the lamin-B receptor (LBR), a major INM protein. Second, we verified the existence of peripheral channels in the scaffolding of NPCs and, for the first time, directly observed the transit of INM proteins through these channels in live cells. 
Third, our research elucidated the roles that both the nuclear localization signal (NLS) and intrinsically disordered (ID) domains play in INM protein transport. Finally, my work identified which transport routes are used by NETs destined to localize in the INM.

      Stevens, Roy H.; Lu, Yongbo; Nissan, Roni (Temple University. Libraries, 2012)
      Objective: To examine dentine matrix protein 1 (DMP1) and dentine sialophosphoprotein (DSPP) expression and subcellular localization in various cell lines to better understand their function. Methods: RT-PCR, immunofluorescent staining, and Western blot analyses were used to determine the expression and subcellular localization of DMP1 and DSPP in various cell lines, including odontoblast-like cells (17IIA11), preosteoblasts (MC3T3-E1), mesenchymal cells (C3H10T1/2), and human dental pulp stem cells (DPSC). In addition, a haemagglutinin (HA)-tagged DMP1 expression construct was generated and examined for its subcellular localization in COS-7 cells. Results: Western blot analysis showed the presence of DMP1 and DSPP in the cytoplasmic and nuclear extracts of MC3T3-E1, 17IIA11, and C3H10T1/2 cells. DMP1 and DSPP transcripts were consistently detected in all three cell lines by RT-PCR analysis. However, immunofluorescent detection of DMP1 revealed two distinct subpopulations of cells with either nuclear or cytoplasmic staining; this phenomenon was not observed with DSPP immunofluorescence. Nuclear and cytoplasmic DMP1 was confirmed in MC3T3-E1 cells by immunofluorescent staining using a rabbit polyclonal antibody; the staining was inhibited when the antibody was preincubated with the synthetic peptide used to generate it, confirming the specificity of the antibody. Nuclear and cytoplasmic localization was also observed in COS-7 cells transfected with the HA-tagged DMP1 expression construct when detected with an antibody against the HA tag. Conclusion: These findings suggest that, apart from their role as constituents of the dentin/bone matrix, both DMP1 and DSPP might play a regulatory or structural role in the nucleus that is not unique to odontoblast/osteoblast cells.

      Qiu, Songgang; Ren, Fei; Vainchtein, Dmitri; Tehrani, Rouzbeh Afsarmanesh (Temple University. Libraries, 2016)
      Thermal energy storage systems, as an integral part of concentrated solar power plants, improve the performance of the system by mitigating the mismatch between energy supply and energy demand. Using a phase change material (PCM) to store energy increases the energy density and hence reduces the size and cost of the system. However, performance is limited by the low thermal conductivity of the PCM, which decreases the heat transfer rate between the heat source and the PCM, prolongs the melting or solidification process, and results in overheating of the interface wall. To address this issue, heat pipes are embedded in the PCM to enhance the heat transfer from the receiver to the PCM during the charging process, and from the PCM to the heat sink during the discharging process. In the current study, the thermal-fluid phenomena inside a heat pipe were investigated. The heat pipe network is specifically configured for implementation in a thermal energy storage unit for a concentrated solar power system. The configuration allows for simultaneous power generation and energy storage for later use. The network is composed of a primary heat pipe and an array of secondary heat pipes. The primary heat pipe has a disk-shaped evaporator and a disk-shaped condenser, which are connected via an adiabatic section. The secondary heat pipes are attached to the condenser of the primary heat pipe and are surrounded by PCM. The other side of the condenser is connected to a heat engine and serves as its heat acceptor. The thermal energy applied to the disk-shaped evaporator changes the phase of the working fluid in the wick structure from liquid to vapor. The vapor pressure drives the vapor through the adiabatic section to the condenser, where it condenses and releases its heat to the heat engine. The condensed working fluid is returned to the evaporator by the capillary forces of the wick. 
The extra heat is then delivered to the phase change material through the secondary heat pipes. During the discharging process, the secondary heat pipes serve as evaporators and transfer the stored energy to the heat engine. Due to the unusual geometry of the heat pipe network, a new numerical procedure was developed. The model is axisymmetric and accounts for the compressible vapor flow in the vapor chamber as well as heat conduction in the wall and wick regions. Because of the large expansion ratio from the adiabatic section to the primary condenser, the vapor flow leaving the adiabatic pipe section of the primary heat pipe for the disk-shaped condenser behaves similarly to a confined impinging jet. Therefore, condensation is not uniform over the main condenser. The feature that distinguishes this numerical procedure from other available techniques is its ability to simulate non-uniform condensation of the working fluid in the condenser section. The vapor jet impingement on the condenser surface, along with condensation, is modeled by attaching a porous layer adjacent to the condenser wall. This porous layer acts as a wall: it lets the vapor flow impinge on it and spread out radially while allowing mass transfer through it. The heat rejected via vapor condensation is estimated from the mass flux by an energy balance at the vapor-liquid interface. This method of simulating a heat pipe is proposed and developed for the first time in the current work. Laboratory cylindrical and complex heat pipes and an experimental test rig were designed and fabricated. The measured data from the cylindrical heat pipe were used to evaluate the accuracy of the numerical results. The effects on heat pipe performance of the operating conditions, heat input, portion of heat transferred to the phase change material, main condenser geometry, primary heat pipe adiabatic section radius and location, and secondary heat pipe configuration were investigated. 
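For reference, the energy balance at the vapor-liquid interface described above is conventionally written as follows (a standard heat pipe relation, not quoted from the dissertation): the local heat rejected at the condenser equals the condensing mass flux times the latent heat of vaporization, integrated over the condenser area,

```latex
q'' = \dot{m}''\, h_{fg}, \qquad
Q_{\mathrm{cond}} = \int_{A_{\mathrm{cond}}} \dot{m}''\, h_{fg} \, dA
```

where \(\dot{m}''\) is the condensation mass flux through the porous layer, \(h_{fg}\) the latent heat of vaporization, and \(A_{\mathrm{cond}}\) the condenser area; integrating the spatially varying mass flux is what captures the non-uniform condensation the procedure is designed to resolve.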
The results showed that, in the case with a tubular adiabatic section in the center, the complex interaction of convective and viscous forces in the main condenser chamber caused several recirculation zones to form in this region, which complicated the performance of the heat pipe. The shapes and locations of the recirculation zones, which are affected by the geometrical features and the heat input, play an important role in the condenser temperature distributions. The temperature distributions of the primary condenser and the secondary heat pipes depend strongly on the secondary heat pipe configuration and the main condenser spacing, especially for cases with higher heat inputs and higher percentages of heat transferred to the PCM via the secondary heat pipes. It was found that changing the entrance shape of the primary condenser and the secondary heat pipes, as well as the location and number of the secondary heat pipes, does not diminish the recirculation zone effects. It was also concluded that changing the location of the adiabatic section reduces the jetting effect of the vapor flow and curtails the recirculation zones, leading to a higher average temperature in the main condenser and secondary heat pipes. The experimental results for the conventional heat pipe are presented; however, the data for the heat pipe network are not included in this dissertation. The experimental analyses revealed that, in transient operation, as the heat input to the system increases while the conditions at the condenser remain constant, the heat pipe operating temperature increases until it reaches another steady-state condition. In addition, the effects of the working fluid and the inclination angle on the performance of a heat pipe were studied. The results showed that in gravity-assisted orientations, the inclination angle has a negligible effect on the performance of the heat pipe. 
However, in gravity-opposed orientations, as the inclination angle increases, the temperature difference between the evaporator and the condenser increases, which results in higher thermal resistance. It was also found that if the heat pipe is under-filled with working fluid, its capillary limit decreases dramatically; conversely, overfilling the heat pipe with working fluid degrades its performance by interfering with the evaporation-condensation mechanism.
    • Numerical Magnitude Knowledge: Are All Numbers Perceived Alike?

      Booth, Julie L.; Gunderson, Elizabeth; Hindman, Annemarie H.; Byrnes, James P. (Temple University. Libraries, 2017)
      A robust knowledge of numbers and their magnitudes is thought to provide students with a strong basis for later mathematics learning and achievement (see Siegler, 2016). The current study examined 7th grade students’ (N = 193) knowledge of numerical magnitudes, how this knowledge varied depending on the number’s type (integer or non-integer) and polarity (positive or negative), and the strategies that students used while estimating different types of numbers. The first experiment assessed students’ magnitude knowledge through a number line packet that used all-positive, all-negative, and bidirectional scales spanning from negative to positive numbers; on these number line scales, students were asked to estimate whole numbers, fractions, and decimals. While prior literature has commonly assessed magnitude knowledge of positive integers (i.e., whole numbers) and non-integers (i.e., non-whole numbers), and the literature on negative numbers is growing, the current study is the first to directly explore students’ understanding of positive and negative magnitudes together through the use of all-negative and all-positive number line scales. Results from mixed linear models showed that a number’s polarity affects students’ estimates on the all-positive and all-negative scales, as estimates of negative and positive numbers differed in both accuracy and linearity. However, negative and positive estimates on the bidirectional scales were not significantly different from one another. Composite scores were created to reflect students’ performance on four types of number line scales: those that asked students to estimate positive integers, negative integers, positive non-integers, and negative non-integers. 
Analyses with these composite scores established that both polarity and number type separately affect students’ estimates: negative estimates had more error and were less linear than positive estimates, and non-integer estimates had more error and were less linear than integer estimates. The second experiment used a think-aloud task to examine the strategies that students used while completing the number line task, and how these strategies differed depending on the number line’s overall scale, its polarity, and the type of number being estimated (i.e., integers or non-integers). While some strategies were prevalent across all types of number line scales, other strategy choices differed depending on the polarity of the scale or the type of numbers being estimated. Findings from this study support the integrated theory of numerical development; namely, that by 7th grade, students have integrated their knowledge of numbers into a unified system that houses both positive and negative numbers, and integers and non-integers. Educational implications are also discussed.
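For context, accuracy on number line estimation tasks of this kind is commonly scored as percent absolute error (PAE), a standard metric in this literature (the abstract itself does not specify which error measure underlies its composite scores):

```latex
\mathrm{PAE} = \frac{\lvert \text{estimate} - \text{target} \rvert}{\text{scale range}} \times 100\%
```

For example, marking the position of 7 at the point representing 10 on a 0-100 scale yields a PAE of 3%; linearity is typically assessed by how well a linear regression of estimates on target values fits (e.g., R²).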
    • Object Trackers Performance Evaluation and Improvement with Applications using High-order Tensor

      Ling, Haibin; Latecki, Longin; Tan, Chiu C.; Yan, Qimin (Temple University. Libraries, 2020)
      Visual tracking is one of the fundamental problems in computer vision. It has been a widely explored area attracting a great amount of research effort. Over the decades, hundreds of visual tracking algorithms, or trackers for short, have been developed, and a large collection of public datasets is available alongside them. As the number of trackers grows, how to evaluate which tracker is better becomes a common problem. Many metrics have been proposed, together with numerous evaluation datasets. In my research work, we first apply tracking to multiple objects in a restricted scene with a very low frame rate. This setting poses a unique challenge: the image quality is low, and we cannot assume that consecutive images are close together in time. We design a framework that utilizes background subtraction and object detection, then apply template matching algorithms to achieve tracking-by-detection. While exploring applications of tracking algorithms, we recognized a problem that arises when authors compare their proposed tracker with others: there are unavoidable subjective biases, since it is non-trivial for authors to optimize other trackers, while they can reasonably tune their own tracker to its best. Our assumption is that authors give default settings to other trackers, so the reported performances of those trackers are less biased. We therefore apply a leave-their-own-tracker-out strategy to weigh the performances of the different trackers, and we derive four metrics to justify the results. Besides the biases in evaluation, the datasets we use as ground truth may not be perfect either. Because all of them are labeled by human annotators, they are prone to label errors, especially due to partial visibility and deformation. We demonstrate some human errors in existing datasets and propose smoothing techniques to detect and correct them. We use a two-step adaptive image alignment algorithm to find the canonical view of the video sequence. 
We then use different techniques to smooth the trajectories to varying degrees. The results show this can slightly improve the trained model, but it would overfit if overcorrected. Once we have a clear understanding of, and reasonable approaches to, the visual tracking scenario, we apply the principles to multi-target tracking. To solve the problem, we formulate it as a multi-dimensional assignment problem and encode the motion information in a high-order tensor framework. We propose to solve it using rank-1 tensor approximation and use a tensor power iteration algorithm to obtain the solution efficiently. The approach applies to pedestrian tracking, aerial video tracking, and curvilinear structure tracking in medical video. Furthermore, the proposed framework can also accommodate the affinity measurement of multiple objects simultaneously. We propose the Multiway Histogram Intersection to obtain similarities between the histograms of more than two targets. Using the tensor power iteration algorithm, we show that it can be applied in several multi-target tracking applications.
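As a minimal illustration of the rank-1 tensor approximation mentioned above, the generic power iteration for a dense 3-way tensor can be sketched as follows; this is a sketch of the general technique with hypothetical variable names, not the dissertation's actual formulation, which encodes multi-frame motion affinities:

```python
import numpy as np

def rank1_power_iteration(T, iters=50, seed=0):
    """Approximate a 3-way tensor T by lam * (u outer v outer w) via power iteration."""
    rng = np.random.default_rng(seed)
    I, J, K = T.shape
    # Random positive initialization of the three factor vectors.
    u, v, w = rng.random(I), rng.random(J), rng.random(K)
    for _ in range(iters):
        # Update each factor by contracting T with the other two, then normalize.
        u = np.einsum('ijk,j,k->i', T, v, w); u /= np.linalg.norm(u)
        v = np.einsum('ijk,i,k->j', T, u, w); v /= np.linalg.norm(v)
        w = np.einsum('ijk,i,j->k', T, u, v); w /= np.linalg.norm(w)
    # Best rank-1 weight for the converged factors.
    lam = np.einsum('ijk,i,j,k->', T, u, v, w)
    return lam, u, v, w
```

In a multi-target tracking setting, the tensor entries would hold association affinities across frames, and the recovered factor vectors softly indicate which detections belong to the same target; the names `T`, `u`, `v`, `w` here are illustrative only.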
    • Objectivity and Autonomy in the Newsroom: A Field Approach

      Garrett, Paul B., 1968-; Jhala, Jayasinhji; Kitch, Carolyn L. (Temple University. Libraries, 2008)
      This dissertation provides a better understanding of how journalists attain their personal and occupational identities. In particular, I examine the origins and meanings of journalistic objectivity as well as the professional autonomy specific to journalism. Journalists understand objectivity as a worldview, a value, an ideal, and an impossibility. A central question that remains is why the term objectivity has become highly devalued in journalistic discourse over the past 30 years, a puzzling development when considered in light of evidence that "objectivity" remains important in American journalism. I use Bourdieu's notion of field to explore anthropological ways of looking at objectivity, for instance, viewing it as a practice that distinguishes journalists from other professionals as knowledge workers. Applying the notion of field to the journalistic field through anthropological methods and perspectives permits the linkage of microlevel perspectives to macrolevel social phenomena. The dissertation demonstrates how qualitative research on individuals and newsroom organizations can be connected to the field of journalism in the United States. Additionally, it offers insight into why journalists continue to embrace objectivity, even as they acknowledge its deficiencies as a journalistic goal.

      DuCette, Joseph P.; Cromley, Jennifer; Schifter, Catherine; Shapiro, Joan Poliner; Fullard, William (Temple University. Libraries, 2009)
      Exploration into teacher competency of various types has gone on for quite some time. An untapped resource regarding teacher expertise is students' perceptions of it, particularly students' ability to identify the types of behaviors that expert and non-expert teachers exhibit in the classroom. This study investigated the frequency and variety of expert behaviors in the high school classroom. High school teachers (n = 25) were observed during regular class periods using the Teacher Behavior Checklist, a checklist of behaviors developed for this study from discussions with high school students, teachers, and administrators, and from the existing teacher competency literature. Results suggest discrimination between expert and non-expert teachers similar to that reported by Berliner (2001). Agreement among students' perceptions of expertise, classroom observations, and the literature suggests that high school students are capable of accurately identifying expert and non-expert teacher behaviors. Further, some data suggest that expert teachers draw from a narrower behavioral scheme and exhibit expert-designated behaviors more often than their non-expert colleagues. This study highlights the need to close the evaluative loop through the utilization of student perceptions.
    • Occupational Therapy Level II Fieldwork: Effectiveness in Preparing Students for Entry-Level Practice

      DuCette, Joseph P.; Kinnealey, Moya; Schifter, Catherine; Weiss, Donna (Donna F.); Fullard, William (Temple University. Libraries, 2009)
      Occupational therapy (OT) is a rehabilitation profession in which licensed therapists facilitate the functional independence, to the greatest extent possible, of individuals with disabilities. Education for OT is at the Master’s level, consisting of a two-year academic program followed by clinical Fieldwork II, a required 12-week internship under the mentorship of a licensed therapist with at least one year’s experience. Because clinical fieldwork sites differ in size and resources, and clinical instructors may have only one year’s experience and no formal training in instruction, there is great variability in students’ clinical fieldwork experiences. The purpose of this study was to determine novice rehabilitation OTs’ perceptions of four key factors in clinical education: first, skill areas in which they felt most prepared; second, areas perceived as obstacles in the adjustment to entry-level practice; third, essential elements of an ideal clinical learning environment; and fourth, the need for credentialing clinical instructors. Participants were 1-3 years past their rehabilitation fieldwork, with their first job in rehabilitation. An online survey (N = 45) and audiotaped interviews (N = 9) were used to collect data on new OTs’ perceptions of Fieldwork II experiences. Interviewees represented a convenience sample independent of the survey participants. Most participants reported feeling prepared to perform basic clinical skills, communicate on interdisciplinary teams, and seek mentorship in the workplace. Less proficiency was perceived in the areas of patient/family communication and coping with reality shock (adjustment to real-life practice). Over half of the participants felt that there should be some kind of mandatory credentialing for clinical instructors. There was consensus among OTs regarding the ideal Fieldwork II setting, which included well-trained instructors, availability of onsite learning, and a well-equipped clinical site.
    • Occurrence and Evaluation of White Spot Lesions in Orthodontic Patients: A Pilot Study

      Sciote, James J.; Godel, Jeffrey H.; Tellez Merchán, Marisol (Temple University. Libraries, 2014)
      Orthodontic treatment may cause an increase in the rate of enamel decalcification on tooth surfaces, producing White Spot Lesions (WSL). Orthodontic patients are at higher risk for decalcification because orthodontic appliances retain food debris, which leads to increased plaque formation. Dental plaque, an oral biofilm shaped by factors including genetics, diet, hygiene, and environment, contains acid-producing bacterial strains with a predominance of Mutans Streptococcus (MS). MS and other strains metabolize oral carbohydrates during ingestion, and the byproducts acidify the biofilm, beginning a process of enamel decalcification and formation of WSL. This study tests whether patients in orthodontic treatment at Temple University can be used as subjects for further longitudinal study of WSL risk factors. Twenty patients between the ages of ten and eighteen, each after three months or more of treatment, were enrolled to determine whether duration of treatment, hygiene, sense of coherence, obesity, diet frequencies, age, and gender correlated with the development of WSL. Of these factors, age was positively correlated with the number of untreated decayed surfaces. WSL and plaque levels may correlate negatively with increased brushing frequency and duration, while flossing frequency demonstrated a statistically significant negative correlation. This population may be suitable for further study because of its high incidence of WSL (75%); however, difficulty in enrollment and patient attrition necessitate that future studies be modified.
    • Ocean Acidification and the Cold-Water Coral Lophelia pertusa in the Gulf of Mexico

      Cordes, Erik E.; Kulathinal, Rob J.; Sanders, Robert W.; Tanaka, Jacqueline; Fisher, Charles R. (Charles Raymond) (Temple University. Libraries, 2013)
      Ocean acidification is the reduction in seawater pH due to the absorption of anthropogenic carbon dioxide by the oceans. Reductions in seawater pH can inhibit the precipitation of aragonite, a calcium carbonate mineral used by marine calcifiers such as corals. Lophelia pertusa is a cold-water coral that forms large reef structures that enhance local biodiversity on the seafloor, and it is commonly found from 300-600 meters depth on hard substrata in the Gulf of Mexico. The present study investigated the potential impacts of ocean acidification on L. pertusa in the Gulf of Mexico through combined field and laboratory analyses. A field component characterized the carbonate chemistry of L. pertusa habitats in the Gulf of Mexico, an important step in establishing a baseline against which future changes in seawater pH can be measured, in addition to collecting in situ data for the design and execution of perturbation experiments in the laboratory. A series of recirculating aquaria were designed and constructed for the present study to support the maintenance of, and experimentation on, live L. pertusa in the laboratory. Finally, laboratory experiments testing the mortality and growth responses of L. pertusa to ocean acidification identified thresholds for calcification and a range of sensitivities to ocean acidification by individual genotype. The results of this study permit the monitoring of ongoing ocean acidification in the deep Gulf of Mexico, and show that ocean acidification's impacts may not be consistent across individuals within populations of L. pertusa.
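For reference, the carbonate-chemistry quantity at the center of such studies is the aragonite saturation state, a standard relation in seawater chemistry (not quoted from the abstract):

```latex
\Omega_{\mathrm{arag}} = \frac{[\mathrm{Ca}^{2+}]\,[\mathrm{CO}_3^{2-}]}{K^{*}_{sp,\,\mathrm{arag}}}
```

where \(K^{*}_{sp}\) is the stoichiometric solubility product of aragonite at in situ temperature, salinity, and pressure. \(\Omega > 1\) thermodynamically favors precipitation and \(\Omega < 1\) favors dissolution, so absorbed CO₂, which lowers carbonate ion concentration, pushes deep coral habitats toward undersaturation.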
    • Of Roads and Revolutions: Peasants, Property, and the Politics of Development in La Libertad, Chontales (1895-1995)

      Goode, Judith, 1939-; White, Sydney Davant; Walker, Kathy Le Mons; Patterson, Thomas C. (Thomas Carl), 1937- (Temple University. Libraries, 2010)
      This dissertation analyzes the political-economy of agrarian social relations and uneven development in La Libertad, Chontales, Nicaragua. It locates the development of agrarian structures and municipal politics at the interstices of local level processes and supra-local political-economic projects, i.e., an expanding world market, Nicaraguan nation-state and class formation, and U.S. imperialism. The formation and expansion of private property in land and the contested placement of municipal borders form the primary locus for this analysis of changing agrarian relations. Over the course of the century explored in this dissertation, the uneven development of class and state power did not foster capitalist relations of production (i.e., increasing productivity based on new investment, development of the forces of production, proletarianization) and did not entail the disappearance of peasant producers; rather, peasant producers proliferated. Neither emerging from a pre-capitalist past nor forging a (classically) capitalist present, classes and communities were shaped through constant movement (e.g., waves of migration and population movements, upward and downward mobility) and structured by forms of accumulation rooted in extractive economic practices and forms of dependent-commercial capitalism on the one hand, and the politics of state - including municipal - formative dynamics on the other. The proliferation of peasant producers, both constrained and made possible by these processes, depended upon patriarchal relations (through which family labor was mobilized and landownership and use framed) and an expansive frontier (through which land pressure was relieved and farm fragmentation mitigated), although larger ranchers and landlords depended upon and benefited from these as well, albeit in different ways. 
The social relations among different classes and strata were contradictory, entailing forms of dependence, subordination, and exploitation as well as identification and affinity. In the context of the Sandinista revolution, these ties created the basis for a widely shared counterrevolutionary political stance across classes and strata while these class and strata distinctions conditioned the specificities and experiences of opposition.
    • Old Stories and New Visualizations: Digital Timelines as Public History Projects

      Bruggeman, Seth C., 1975-; Lowe, Hilary Iris; Dorman, Dana (Temple University. Libraries, 2015)
      This thesis explores the use and potential of digital timelines in public history projects. Digital timelines have become a popular and accessible way for institutions and individuals to write history. The history of timelines indicates that people understand timelines as authoritative information visualizations because they represent concrete events in absolute time. The goals of public history often conflict with the linear, progressive nature of most timelines. This thesis reviews various digital timeline tools and uses The Print Center's Centennial Timeline as an in-depth case study that takes into account the multifaceted factors involved in creating a digital timeline. Digital history advocates support digital scholarship as an alternative to traditional narrative writing. This thesis illustrates that digital timelines can enable people to visualize history in unexpected ways, fostering new arguments and creative storytelling. Despite their potential, digital timelines often replicate the conventions of their paper counterparts because of the authoritative nature of the timeline form.
    • Older and Weaker or Older and Wiser: Exploring the Drivers of Performance Differences in Young and Old Adults on Experiential Learning Tasks in the Presence of Veridical Feedback

      Eisenstein, Eric; Morrin, Maureen; Mudambi, Susan; Ruvio, Ayalla (Temple University. Libraries, 2016)
      This dissertation proposes that while the traditional cognitive psychology literature suggests that cognitive function decreases with age, these decreases depend on the type of testing being performed. While traditional cognitive tests of memory and processing speed show declines associated with age, this research suggests these declines are not robust across all types of learning. The coming pages present four studies aimed at furthering our understanding of how different age cohorts of consumers learn about products in active and complex marketplaces. Study one reveals an age advantage associated with learning experientially; an interesting and somewhat surprising result that warrants further investigation given the rapid rate at which populations are aging. The additional studies presented here begin that investigation through the application of several psychological theories. This research explores, as possible explanations, increased vigilance associated with the security motivation system (based on the principles of evolutionary psychology), the possible impact of mortality salience through the application of Terror Management Theory, and a positive correlation between age and cognitive control.

      Dragut, Eduard Constantin; Guo, Yuhong; Zhang, Kai; Shi, Justin Y.; Meng, Weiyi (Temple University. Libraries, 2020)
      Data plays a key role in almost every field of computer science, including the knowledge graph field. The type of data varies across fields: in the knowledge graph field the data takes the form of knowledge triples, in computer vision it is visual data such as images and videos, and in natural language processing it is textual data such as articles and news. Raw data cannot be utilized directly by machine learning models, so data representation learning and feature design for various types of data are two critical tasks in many fields of computer science. Researchers develop various models and frameworks to learn and extract features, aiming to represent information in defined embedding spaces. Classic models usually embed the data in a low-dimensional space, while in recent years neural network models have been able to generate more meaningful and complex high-dimensional deep features. In the knowledge graph field, almost every approach represents entities and relations in a low-dimensional space, because real-world knowledge graphs contain an enormous number of entities and triples. Recently a few approaches have applied neural networks to knowledge graph learning; however, these models capture only local and shallow features. We observe three important issues in the development of feature learning with neural networks. First, neural networks are not black boxes that work well in every case without specific design; much work remains on how to design more powerful and robust neural networks for different types of data. Second, more studies are needed on utilizing these representations and features in applications. Third, traditional representations and features work better in some domains, while deep representations and features perform better in others; transfer learning is introduced to bridge the gap between domains and adapt various types of features to many tasks. 
In this dissertation, we aim to address these issues. For the knowledge graph learning task, we present several important observations, both theoretical and practical, about current knowledge graph learning approaches, especially those based on Convolutional Neural Networks. Beyond the knowledge graph work, we develop feature and representation learning frameworks for various data types as well as an effective transfer learning algorithm that utilizes the learned features and representations. The features and representations obtained by neural networks are applied successfully in multiple fields. First, we analyze current issues in knowledge graph learning models and present eight observations about existing knowledge graph embedding approaches, especially those based on Convolutional Neural Networks. Second, we propose a novel unsupervised heterogeneous domain adaptation framework that can handle features of various types; multimedia features can be adapted, and the proposed algorithm bridges the representation gap between the source and target domains. Third, we propose a novel framework to learn and embed user comments and online news data in units of sessions, predicting the article of interest for users with deep neural networks and attention models. Lastly, we design and analyze a large number of features to represent the dynamics of user comments and news articles. The features span a broad spectrum of facets, including news article and comment contents, temporal dynamics, sentiment/linguistic features, and user behaviors. Our main insight is that the early dynamics of user comments contribute the most to an accurate prediction, while news-article-specific factors have surprisingly little influence.
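The dissertation's own models are not detailed in this abstract, but the "low-dimensional space" representation of entities and relations it refers to can be illustrated with a TransE-style translational score, a standard baseline in the field (illustrative only; `transe_score` and the toy dimensions are assumptions, not the author's method):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy vocabulary: 4 entities and 2 relations, embedded in a d=8 space.
n_entities, n_relations, d = 4, 2, 8
E = rng.normal(size=(n_entities, d))   # entity embeddings
R = rng.normal(size=(n_relations, d))  # relation embeddings

def transe_score(h, r, t):
    """TransE-style plausibility of a triple (h, r, t): the triple is
    plausible when the translated head h + r lies close to the tail t,
    so we return the negative Euclidean distance (higher = better)."""
    return -np.linalg.norm(E[h] + R[r] - E[t])
```

In practice the embeddings are trained so that observed triples score higher than corrupted ones; the point here is only that each entity and relation is a single short vector rather than a deep feature map.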
    • On Generalized Solutions to Some Problems in Electromagnetism and Geometric Optics

      Gutiérrez, Cristian E., 1950-; Berhanu, Shiferaw; Mendoza, Gerardo A.; Strain, Robert M. (Temple University. Libraries, 2016)
      The Maxwell equations of electromagnetism form the foundation of classical electromagnetism and are of interest to mathematicians, physicists, and engineers alike. The first part of this thesis concerns boundary value problems for the anisotropic Maxwell equations in Lipschitz domains. In this case, the material parameters that arise in the Maxwell system are matrix-valued functions. Using methods from functional analysis, global-in-time solutions to initial boundary value problems with general nonzero boundary data and nonzero current density are obtained, assuming only that the material parameters are bounded and measurable. This problem is motivated by an electromagnetic inverse problem, similar to the classical Calderón inverse problem in Electrical Impedance Tomography. The second part of this thesis deals with materials having a negative refractive index. Such materials were postulated by Veselago in 1968, and since 2001 physicists have been able to construct them in the laboratory. Research on the behavior of these materials, called metamaterials, has been extremely active in recent years. We study here refraction problems in the setting of Negative Refractive Index Materials (NIMs). In particular, it is shown how to obtain weak solutions (defined similarly to Brenier solutions for the Monge-Ampère equation) to these problems, in both the near field and the far field. The far field problem can be treated using Optimal Transport techniques; as such, a fully nonlinear PDE of Monge-Ampère type arises here.
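The anisotropic Maxwell system described in this abstract can be written in its standard time-dependent form with matrix-valued material parameters; a textbook sketch, not necessarily the thesis's exact formulation:

```latex
% Time-dependent Maxwell system on a Lipschitz domain \Omega,
% with current density J and matrix-valued permittivity/permeability:
\[
\varepsilon(x)\,\partial_t E = \nabla \times H - J, \qquad
\mu(x)\,\partial_t H = -\,\nabla \times E
\quad \text{in } \Omega \times (0,T),
\]
\[
\varepsilon,\, \mu \in L^{\infty}\!\left(\Omega;\, \mathbb{R}^{3\times 3}\right)
\ \text{symmetric and uniformly positive definite (bounded, measurable).}
\]
```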
    • On Group-Sequential Multiple Testing Controlling Familywise Error Rate

      Sarkar, S. K. (Sanat K.); Han, Xu; Zhao, Zhigen; Tang, Cheng Yong; Rom, Dror (Temple University. Libraries, 2015)
      The importance of multiplicity adjustment has gained wide recognition in modern scientific research. Without it, there will be too many spurious results and reproducibility becomes an issue; with it, if overly conservative, discoveries will be made more difficult. In the current literature on repeated testing of multiple hypotheses, Bonferroni-based methods are still the main vehicle carrying the bulk of multiplicity adjustment. There is room for power improvement by suitably utilizing both hypothesis-wise and analysis-wise dependencies. This research contributes to the development of a natural group-sequential extension of the classical stepwise multiple testing procedures, such as Dunnett's step-down and Hochberg's step-up procedures. It is shown that the proposed group-sequential procedures strongly control the familywise error rate while being more powerful than the recently developed class of group-sequential Bonferroni-Holm procedures. In particular, a convexity property is discovered for the distribution of the maxima of pairwise null p-values with the underlying test statistics having distributions such as bivariate normal, t, Gamma, F, or Archimedean copulas. This property lends itself to immediate use in improving Holm's procedure by incorporating pairwise dependencies of p-values. The improved Holm's procedure, like all step-down multiple testing procedures, can also be naturally extended to the group-sequential setting.
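Holm's procedure, which this abstract takes as the baseline for improvement, is simple to state; a minimal sketch of the classical (unimproved) step-down procedure, not the dependence-adjusted variant proposed in the thesis:

```python
def holm_stepdown(pvalues, alpha=0.05):
    """Classical Holm step-down: sort p-values ascending and reject
    the hypothesis with the i-th smallest p-value while
    p_(i) <= alpha / (m - i + 1); stop at the first failure.
    Returns a reject/accept flag per hypothesis in the input order."""
    m = len(pvalues)
    order = sorted(range(m), key=lambda i: pvalues[i])
    reject = [False] * m
    for rank, idx in enumerate(order):          # rank = 0, 1, ..., m-1
        if pvalues[idx] <= alpha / (m - rank):  # alpha/m, alpha/(m-1), ...
            reject[idx] = True
        else:
            break                               # step-down: stop here
    return reject

# Example: with m = 4 and alpha = 0.05, the thresholds are
# 0.0125, 0.0167, 0.025, 0.05 for the sorted p-values.
print(holm_stepdown([0.01, 0.04, 0.03, 0.005]))  # [True, False, False, True]
```

The thesis's improvement replaces the Bonferroni-type critical values above with ones that exploit pairwise dependence of the p-values, via the convexity property described in the abstract.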
    • On Leveraging Representation Learning Techniques for Data Analytics in Biomedical Informatics

      Obradovic, Zoran; Vucetic, Slobodan; Souvenir, Richard M.; Kaplan, Avi (Temple University. Libraries, 2019)
      Representation Learning is ubiquitous in the state-of-the-art machine learning workflow, including data exploration/visualization, data preprocessing, data model learning, and model interpretation. However, the majority of newly proposed Representation Learning methods are suited to problems with a large amount of data, and applying them to problems with a limited amount of data may lead to unsatisfactory performance. Therefore, there is a need for Representation Learning methods tailored to problems with "small data", such as clinical and biomedical data analytics. In this dissertation, we describe our studies tackling challenging clinical and biomedical data analytics problems from four perspectives: data preprocessing, temporal data representation learning, output representation learning, and joint input-output representation learning. Data scaling is an important component of data preprocessing. The objective in data scaling is to scale/transform the raw features into reasonable ranges so that each feature of an instance is exploited equally by the machine learning model. For example, in a credit fraud detection task, a machine learning model may utilize a person's credit score and annual income as features, but because the ranges of these two features differ, the model may weight one more heavily than the other. In this dissertation, I thoroughly introduce the data scaling problem and describe an approach that intrinsically handles outliers and leads to better model prediction performance. Learning new representations for data in unstandardized form is a common task in data analytics and data science applications. Usually, data come in tabular form, namely, the data are represented by a table in which each row is the feature vector of an instance. 
However, it is also common for data not to be in this form, for example, texts, images, and video/audio records. In this dissertation, I describe the challenge of analyzing imperfect multivariate time series data in healthcare and biomedical research and show that the proposed method can learn a powerful representation that handles various imperfections and leads to improved prediction performance. Learning output representations is a new aspect of Representation Learning, and its applications have shown promising results in complex tasks, including computer vision and recommendation systems. The main objective of an output representation algorithm is to explore the relationships among the target variables so that a prediction model can efficiently exploit their similarities and potentially improve prediction performance. In this dissertation, I describe a learning framework that incorporates output representation learning into time-to-event estimation. In particular, the approach learns the model parameters and time vectors simultaneously. Experimental results not only show the effectiveness of this approach but also demonstrate its interpretability through visualizations of the time vectors in 2-D space. Learning the input (feature) representation, learning the output representation, and predictive modeling are closely related; considering them together in a joint framework is therefore a natural extension of the state of the art. In this dissertation, I describe a large-margin, ranking-based learning framework for time-to-event estimation with joint input embedding learning, output embedding learning, and model parameter learning. In the framework, I cast the functional learning problem as a kernel learning problem, and by adopting theories from Multiple Kernel Learning, I propose an efficient optimization algorithm. Empirical results also show its effectiveness on several benchmark datasets.
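The abstract's data-scaling discussion notes that outliers can distort feature ranges, but does not disclose the proposed method. As a neutral illustration, standard median/IQR (robust) scaling shows how a scaler can be made insensitive to a single extreme value (`robust_scale` is a hypothetical helper, not the dissertation's approach):

```python
import statistics

def robust_scale(values):
    """Scale by the median and interquartile range instead of min/max
    or mean/std, so one extreme outlier cannot dominate the transform
    the way it does with min-max scaling."""
    med = statistics.median(values)
    q1, _, q3 = statistics.quantiles(values, n=4)  # quartiles
    iqr = q3 - q1
    if iqr == 0:
        return [0.0 for _ in values]  # degenerate case: constant bulk
    return [(v - med) / iqr for v in values]

# With min-max scaling, the outlier 100 would compress 1..4 into a
# tiny interval near 0; robust scaling keeps them well separated.
scaled = robust_scale([1, 2, 3, 4, 100])
```

Min-max scaling of `[1, 2, 3, 4, 100]` maps the first four values into roughly 0-0.03; the robust version keeps them spread around zero, which is the outlier property the abstract alludes to.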

      Lorenz, Martin, 1951-; Walton, Chelsea; Dolgushev, Vasily; Riseborough, Peter (Temple University. Libraries, 2017)
      Representation theory is a field of study within abstract algebra that originated at the end of the 19th century in the work of Frobenius on representations of finite groups. More recently, Hopf algebras -- a class of algebras that includes group algebras, enveloping algebras of Lie algebras, and many other interesting algebras that are often referred to under the collective name of "quantum groups" -- have come to the fore. This dissertation discusses generalizations of certain results from group representation theory to the setting of Hopf algebras. Specifically, our focus is on the following two areas: Frobenius divisibility and Kaplansky's sixth conjecture, and the adjoint representation and the Chevalley property.
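The two divisibility statements named in this abstract can be stated precisely; standard formulations (not quoted from the thesis):

```latex
% Frobenius divisibility: for a finite group G over an algebraically
% closed field of characteristic zero,
\[
V \text{ an irreducible representation of } G
\ \Longrightarrow\ \dim V \ \big|\ |G|.
\]
% Kaplansky's sixth conjecture: the analogous statement for a
% finite-dimensional semisimple Hopf algebra H over such a field,
\[
V \text{ an irreducible representation of } H
\ \Longrightarrow\ \dim V \ \big|\ \dim H,
\]
% which remains open in general; the dissertation studies cases in
% which it can be established.
```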