• Object Trackers Performance Evaluation and Improvement with Applications using High-order Tensor

      Ling, Haibin; Latecki, Longin; Tan, Chiu C.; Yan, Qimin (Temple University. Libraries, 2020)
      Visual tracking is one of the fundamental problems in computer vision. It has been a widely explored area attracting a great amount of research effort. Over the decades, hundreds of visual tracking algorithms, or trackers for short, have been developed, and a great number of public datasets are available alongside them. As the number of trackers grows, evaluating which tracker performs better becomes a common problem. Many metrics have been proposed, together with numerous evaluation datasets. In my research work, we first apply tracking in practice to multiple objects in a restricted scene with a very low frame rate. This setting poses a unique challenge: the image quality is low, and we cannot assume consecutive images are close together in time. We design a framework that utilizes background subtraction and object detection, and then apply template matching algorithms to achieve tracking by detection. While exploring applications of tracking algorithms, we recognize a problem: when authors compare their proposed tracker with others, there are unavoidable subjective biases, since it is non-trivial for authors to optimize other trackers, while they can reasonably tune their own tracker to its best. Our assumption is that authors give default settings to other trackers, and hence the reported performances of those trackers are less biased. We therefore apply a leave-their-own-tracker-out strategy to weigh the performances of the other trackers, and we derive four metrics to justify the results. Besides biases in evaluation, the datasets we use as ground truth may not be perfect either. Because all of them are labeled by human annotators, they are prone to labeling errors, especially under partial visibility and deformation. We demonstrate some human errors in existing datasets and propose smoothing techniques to detect and correct them. We use a two-step adaptive image alignment algorithm to find the canonical view of the video sequence, and then use different techniques to smooth the trajectories to varying degrees. The results show this can slightly improve the trained model, but it would overfit if overcorrected. Once we have a clear understanding of and reasonable approaches to the visual tracking scenario, we apply the principles to multi-target tracking. To solve the problem, we formulate it as a multi-dimensional assignment problem and encode the motion information in a high-order tensor framework. We propose solving it via rank-1 tensor approximation and use a tensor power iteration algorithm to obtain the solution efficiently. The approach applies to pedestrian tracking, aerial video tracking, and curvilinear structure tracking in medical video. Furthermore, the proposed framework also supports measuring the affinity of multiple objects simultaneously. We propose the Multiway Histogram Intersection to obtain similarities among histograms of more than two targets. Using the tensor power iteration algorithm, we show the framework can be applied in several multi-target tracking applications.
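      As a concrete illustration of the rank-1 tensor approximation and multiway affinity ideas above, the sketch below runs a tensor power iteration on a 3-way affinity tensor and computes a multiway histogram intersection. It is a minimal, generic reading of the named techniques; the function names and exact update rule are assumptions, not the dissertation's code.

```python
import numpy as np

def rank1_power_iteration(T, iters=50, seed=0):
    """Rank-1 approximation of a nonnegative 3-way affinity tensor.

    T[i, j, k] holds the affinity of linking detections i, j, k across
    three frames; the returned unit vectors score each detection's
    membership in the dominant track hypothesis.
    """
    rng = np.random.default_rng(seed)
    u, v, w = (rng.random(n) for n in T.shape)
    for _ in range(iters):
        # Alternately contract the tensor against two factors, then renormalize.
        u = np.einsum('ijk,j,k->i', T, v, w); u /= np.linalg.norm(u)
        v = np.einsum('ijk,i,k->j', T, u, w); v /= np.linalg.norm(v)
        w = np.einsum('ijk,i,j->k', T, u, v); w /= np.linalg.norm(w)
    return u, v, w

def multiway_histogram_intersection(hists):
    """Similarity of several normalized histograms: sum of bin-wise minima."""
    return float(np.minimum.reduce(hists).sum())
```

For a tensor that is exactly rank-1, one sweep recovers the factors up to normalization; in tracking, the affinity tensor is only approximately rank-1, so several sweeps are used.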
    • Objectivity and Autonomy in the Newsroom: A Field Approach

      Garrett, Paul B., 1968-; Jhala, Jayasinhji; Kitch, Carolyn L. (Temple University. Libraries, 2008)
      This dissertation provides a better understanding of how journalists attain their personal and occupational identities. In particular, I examine the origins and meanings of journalistic objectivity as well as the professional autonomy that is specific to journalism. Journalists understand objectivity as a worldview, value, ideal, and impossibility. A central question that remains is why the term objectivity has become highly devalued in journalistic discourse over the past 30 years, a puzzling development when considered in light of evidence that "objectivity" remains important in American journalism. I use Bourdieu's notion of field to explore anthropological ways of looking at objectivity, for instance, viewing it as a practice that distinguishes journalists from other professionals as knowledge workers. Applying the notion of field to journalism through anthropological methods and perspectives permits linking microlevel perspectives to macrolevel social phenomena. The dissertation demonstrates how qualitative research on individuals and newsroom organizations can be connected to the field of journalism in the United States. Additionally, it offers insight into why journalists continue to embrace objectivity, even as they acknowledge its deficiencies as a journalistic goal.

      DuCette, Joseph P.; Cromley, Jennifer; Schifter, Catherine; Shapiro, Joan Poliner; Fullard, William (Temple University. Libraries, 2009)
      Exploration into teacher competency of various types has gone on for quite some time. An untapped resource regarding teacher expertise is students' perceptions of it, particularly students' ability to identify the types of behaviors that expert and non-expert teachers exhibit in the classroom. This study investigated the frequency and variety of expert behaviors in the high school classroom. High school teachers (n = 25) were observed during regular class periods using the Teacher Behavior Checklist, a checklist of behaviors developed for this study from discussions with high school students, teachers, and administrators, and from the existing teacher competency literature. Results suggest discrimination between expert and non-expert teachers similar to that reported by Berliner (2001). Agreement among students' perceptions of expertise, classroom observations, and the literature suggests that high school students are capable of accurately identifying expert and non-expert teacher behaviors. Further, some data suggest that expert teachers draw from a narrower behavioral scheme and exhibit expert-designated behaviors more often than their non-expert colleagues. This study highlights the need to close the evaluative loop through the use of student perceptions.
    • Occupational Therapy Level II Fieldwork: Effectiveness in Preparing Students for Entry-Level Practice

      DuCette, Joseph P.; Kinnealey, Moya; Schifter, Catherine; Weiss, Donna (Donna F.); Fullard, William (Temple University. Libraries, 2009)
      Occupational therapy (OT) is a rehabilitation profession in which licensed therapists facilitate the functional independence, to the greatest extent possible, of individuals with disabilities. Education for OT is at the Master’s level, consisting of a two-year academic program followed by clinical Fieldwork II, a required 12-week internship under the mentorship of a licensed therapist with at least one year’s experience. Because clinical fieldwork sites differ in size and resources, and clinical instructors may have only one year’s experience and no formal training in instruction, there is great variability in students’ clinical fieldwork experiences. The purpose of this study was to determine novice rehabilitation OTs’ perceptions of four key factors in clinical education: first, skill areas in which they felt most prepared; second, areas perceived as obstacles in adjusting to entry-level practice; third, essential elements of an ideal clinical learning environment; and fourth, the need for credentialing clinical instructors. Participants were 1-3 years past their rehabilitation fieldwork, with their first job in rehabilitation. An online survey (N=45) and audiotaped interviews (N=9) were used to collect data on new OTs’ perceptions of their Fieldwork II experiences. Interviewees represented a convenience sample independent of the survey participants. Most participants reported feeling prepared to perform basic clinical skills, communicate on interdisciplinary teams, and seek mentorship in the workplace. Less proficiency was perceived in the areas of patient/family communication and coping with reality shock (adjustment to real-life practice). Over half of the participants felt that there should be some kind of mandatory credentialing for clinical instructors. There was consensus among OTs that the ideal Fieldwork II setting includes well-trained instructors, availability of onsite learning, and a well-equipped clinical site.
    • Occurrence and Evaluation of White Spot Lesions in Orthodontic Patients: A Pilot Study

      Sciote, James J.; Godel, Jeffrey H.; Tellez Merchán, Marisol (Temple University. Libraries, 2014)
      Orthodontic treatment may increase the rate of enamel decalcification on tooth surfaces, producing White Spot Lesions (WSL). Orthodontic patients are at higher risk of decalcification because orthodontic appliances retain food debris, which leads to increased plaque formation. Dental plaque, an oral biofilm shaped by factors including genetics, diet, hygiene, and environment, contains acid-producing bacterial strains with a predominance of Mutans Streptococcus (MS). MS and others metabolize oral carbohydrates during ingestion, the byproducts of which acidify the biofilm to begin a process of enamel decalcification and formation of WSL. This study tests whether patients in orthodontic treatment at Temple University can serve as subjects for further longitudinal study of WSL risk factors. Twenty patients between the ages of ten and eighteen, after three months or more of treatment, were enrolled to determine whether duration of treatment, hygiene, sense of coherence, obesity, diet frequencies, age, and gender correlated with development of WSL. Of these, age was positively correlated with the number of untreated decayed surfaces. WSL and plaque levels may negatively correlate with increased brushing frequency and duration, while flossing frequency demonstrated a statistically significant negative correlation. This population may be suitable for further study because of its high incidence of WSL (75%); however, difficulty in enrollment and patient attrition necessitate that future studies be modified.
    • Ocean Acidification and the Cold-Water Coral Lophelia pertusa in the Gulf of Mexico

      Cordes, Erik E.; Kulathinal, Rob J.; Sanders, Robert W.; Tanaka, Jacqueline; Fisher, Charles R. (Charles Raymond) (Temple University. Libraries, 2013)
      Ocean acidification is the reduction in seawater pH due to the absorption of anthropogenic carbon dioxide by the oceans. Reductions in seawater pH can inhibit the precipitation of aragonite, a calcium carbonate mineral used by marine calcifiers such as corals. Lophelia pertusa is a cold-water coral that forms large reef structures which enhance local biodiversity on the seafloor, and is found commonly from 300-600 meters on hard substrata in the Gulf of Mexico. The present study sought to investigate the potential impacts of ocean acidification on L. pertusa in the Gulf of Mexico through combined field and laboratory analyses. A field component characterized the carbonate chemistry of L. pertusa habitats in the Gulf of Mexico, an important step in establishing a baseline from which future changes in seawater pH can be measured, in addition to collecting in situ data for the design and execution of perturbation experiments in the laboratory. A series of recirculating aquaria were designed and constructed for the present study, and support the maintenance of and experimentation on live L. pertusa in the laboratory. Finally, experiments testing L. pertusa's mortality and growth responses to ocean acidification were conducted in the laboratory, which identified thresholds for calcification and a range of sensitivities to ocean acidification by individual genotype. The results of this study permit the monitoring of ongoing ocean acidification in the deep Gulf of Mexico, and show that ocean acidification's impacts may not be consistent across individuals within populations of L. pertusa.
    • Of Roads and Revolutions: Peasants, Property, and the Politics of Development in La Libertad, Chontales (1895-1995)

      Goode, Judith, 1939-; White, Sydney Davant; Walker, Kathy Le Mons; Patterson, Thomas C. (Thomas Carl), 1937- (Temple University. Libraries, 2010)
      This dissertation analyzes the political-economy of agrarian social relations and uneven development in La Libertad, Chontales, Nicaragua. It locates the development of agrarian structures and municipal politics at the interstices of local level processes and supra-local political-economic projects, i.e., an expanding world market, Nicaraguan nation-state and class formation, and U.S. imperialism. The formation and expansion of private property in land and the contested placement of municipal borders forms the primary locus for this analysis of changing agrarian relations. Over the course of the century explored in this dissertation, the uneven development of class and state power did not foster capitalist relations of production (i.e., increasing productivity based on new investment, development of the forces of production, proletarianization) and did not entail the disappearance of peasant producers; rather, peasant producers proliferated. Neither emerging from a pre-capitalist past nor forging a (classically) capitalist present, classes and communities were shaped through constant movement (e.g., waves of migration and population movements, upward and downward mobility) and structured by forms of accumulation rooted in extractive economic practices and forms of dependent-commercial capitalism on the one hand, and the politics of state - including municipal - formative dynamics on the other. The proliferation of peasant producers, both constrained and made possible by these processes, depended upon patriarchal relations (through which family labor was mobilized and landownership and use framed) and an expansive frontier (through which land pressure was relieved and farm fragmentation mitigated), although larger ranchers and landlords depended upon and benefited from these as well, albeit in different ways. 
The social relations among different classes and strata were contradictory, entailing forms of dependence, subordination, and exploitation as well as identification and affinity. In the context of the Sandinista revolution, these ties created the basis for a widely shared counterrevolutionary political stance across classes and strata while these class and strata distinctions conditioned the specificities and experiences of opposition.
    • Old Stories and New Visualizations: Digital Timelines as Public History Projects

      Bruggeman, Seth C., 1975-; Lowe, Hilary Iris; Dorman, Dana (Temple University. Libraries, 2015)
      This thesis explores the use and potential of digital timelines in public history projects. Digital timelines have become a popular and accessible way for institutions and individuals to write history. The history of timelines indicates that people understand timelines as authoritative information visualizations because they represent concrete events in absolute time. The goals of public history often conflict with the linear, progressive nature of most timelines. This thesis reviews various digital timeline tools and uses The Print Center's Centennial Timeline as an in-depth case study that takes into account the multifaceted factors involved in creating a digital timeline. Digital history advocates support digital scholarship as an alternative to traditional narrative writing. This thesis illustrates that digital timelines can enable people to visualize history in unexpected ways, fostering new arguments and creative storytelling. Despite their potential, digital timelines often replicate the conventions of their paper counterparts because of the authoritative nature of the timeline form.
    • Older and Weaker or Older and Wiser: Exploring the Drivers of Performance Differences in Young and Old Adults on Experiential Learning Tasks in the Presence of Veridical Feedback

      Eisenstein, Eric; Morrin, Maureen; Mudambi, Susan; Ruvio, Ayalla (Temple University. Libraries, 2016)
      This dissertation proposes that while the traditional cognitive psychology literature suggests that cognitive function decreases with age, these decreases depend on the type of testing being performed. While traditional cognitive tests of memory and processing speed show declines associated with age, this research suggests these declines are not robust across all types of learning. The coming pages present four studies aimed at furthering our understanding of how different age cohorts of consumers learn about products in active and complex marketplaces. Study one reveals an age advantage in experiential learning, an interesting and somewhat surprising result that warrants further investigation given the rapid rate at which populations are aging. The additional studies presented here begin that investigation through the application of several psychological theories. This research explores increased vigilance associated with the security motivation system (based on the principles of evolutionary psychology), the possible impact of mortality salience through the application of Terror Management Theory, and a positive correlation between age and cognitive control as possible explanations.

      Dragut, Eduard Constantin; Guo, Yuhong; Zhang, Kai; Shi, Justin Y.; Meng, Weiyi (Temple University. Libraries, 2020)
      Data plays a key role in almost every field of computer science, including the knowledge graph field. The type of data varies across fields: knowledge graphs consist of knowledge triples, computer vision works with visual data such as images and videos, and natural language processing works with textual data such as articles and news. Data cannot be utilized directly by machine learning models, so data representation learning and feature design for various types of data are two critical tasks in many fields of computer science. Researchers develop various models and frameworks to learn and extract features, aiming to represent information in defined embedding spaces. Classic models usually embed the data in a low-dimensional space, while in recent years neural network models have been able to generate more meaningful and complex high-dimensional deep features. In the knowledge graph field, almost every approach represents entities and relations in a low-dimensional space, because real-world knowledge graphs contain an enormous number of triples. Recently, a few approaches have applied neural networks to knowledge graph learning; however, these models capture only local and shallow features. We observe three important issues in the development of feature learning with neural networks. First, neural networks are not black boxes that work well in every case without specific design; much work remains on how to design more powerful and robust neural networks for different types of data. Second, more studies on utilizing these representations and features in applications are necessary. Third, traditional representations and features work better in some domains, while deep representations and features perform better in others; transfer learning is introduced to bridge the gap between domains and adapt various types of features for many tasks.
      In this dissertation, we aim to address the above issues. For the knowledge graph learning task, we present several important observations, both theoretical and practical, about current knowledge graph learning approaches, especially those based on Convolutional Neural Networks. Beyond the knowledge graph work, we develop different types of feature and representation learning frameworks for various data types, as well as an effective transfer learning algorithm that utilizes the learned features and representations. The features and representations obtained by neural networks are applied successfully in multiple fields. First, we analyze current issues in knowledge graph learning models and present eight observations about existing knowledge graph embedding approaches, especially those based on Convolutional Neural Networks. Second, we propose a novel unsupervised heterogeneous domain adaptation framework that can handle features of various types; multimedia features can be adapted, and the proposed algorithm bridges the representation gap between the source and target domains. Third, we propose a novel framework to learn and embed user comments and online news data in units of sessions, predicting the article of interest for users with deep neural networks and attention models. Lastly, we design and analyze a large number of features to represent the dynamics of user comments and news articles. The features span a broad spectrum of facets, including news article and comment contents, temporal dynamics, sentiment/linguistic features, and user behaviors. Our main insight is that the early dynamics of user comments contribute the most to an accurate prediction, while news-article-specific factors have surprisingly little influence.
    • On Generalized Solutions to Some Problems in Electromagnetism and Geometric Optics

      Gutiérrez, Cristian E., 1950-; Berhanu, Shiferaw; Mendoza, Gerardo A.; Strain, Robert M. (Temple University. Libraries, 2016)
      The Maxwell equations of electromagnetism form the foundation of classical electromagnetism, and are of interest to mathematicians, physicists, and engineers alike. The first part of this thesis concerns boundary value problems for the anisotropic Maxwell equations in Lipschitz domains. In this case, the material parameters that arise in the Maxwell system are matrix-valued functions. Using methods from functional analysis, global-in-time solutions to initial boundary value problems with general nonzero boundary data and nonzero current density are obtained, assuming only that the material parameters are bounded and measurable. This problem is motivated by an electromagnetic inverse problem, similar to the classical Calderón inverse problem in Electrical Impedance Tomography. The second part of this thesis deals with materials having negative refractive index. Such materials were postulated by Veselago in 1968, and since 2001 physicists have been able to construct them in the laboratory. Research on the behavior of these materials, called metamaterials, has been extremely active in recent years. We study here refraction problems in the setting of Negative Refractive Index Materials (NIMs). In particular, it is shown how to obtain weak solutions (defined similarly to Brenier solutions for the Monge-Ampère equation) to these problems, both in the near and the far field. The far field problem can be treated using Optimal Transport techniques; as such, a fully nonlinear PDE of Monge-Ampère type arises here.
    • On Group-Sequential Multiple Testing Controlling Familywise Error Rate

      Sarkar, S. K. (Sanat K.); Han, Xu; Zhao, Zhigen; Tang, Cheng Yong; Rom, Dror (Temple University. Libraries, 2015)
      The importance of multiplicity adjustment has gained wide recognition in modern scientific research. Without it, there will be too many spurious results and reproducibility becomes an issue; with it, if overly conservative, discoveries will be made more difficult. In the current literature on repeated testing of multiple hypotheses, Bonferroni-based methods are still the main vehicle carrying the bulk of multiplicity adjustment. There is room for power improvement by suitably utilizing both hypothesis-wise and analysis-wise dependencies. This research contributes a natural group-sequential extension of the classical stepwise multiple testing procedures, such as Dunnett’s step-down and Hochberg’s step-up procedures. It is shown that the proposed group-sequential procedures strongly control the familywise error rate while being more powerful than the recently developed class of group-sequential Bonferroni-Holm procedures. In particular, a convexity property is discovered for the distribution of the maxima of pairwise null P-values when the underlying test statistics have distributions such as bivariate normal, t, Gamma, F, or Archimedean copulas. This property lends itself to immediate use in improving Holm’s procedure by incorporating pairwise dependencies of P-values. The improved Holm’s procedure, like all step-down multiple testing procedures, can also be naturally extended to the group-sequential setting.
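      For reference, the classical (non-sequential) Holm step-down procedure that the proposed methods extend can be sketched as follows. This is the textbook baseline only, not the dissertation's group-sequential or dependence-adjusted procedure:

```python
def holm_stepdown(pvals, alpha=0.05):
    """Holm's step-down procedure: strong FWER control at level alpha.

    Tests p-values in ascending order against alpha/(m - step); stops at
    the first failure, rejecting every hypothesis accepted before it.
    """
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    reject = [False] * m
    for step, i in enumerate(order):
        if pvals[i] <= alpha / (m - step):
            reject[i] = True
        else:
            break  # step-down: once one hypothesis survives, all later ones do
    return reject
```

The improvement described above incorporates pairwise dependencies among the P-values to relax these Bonferroni-style cutoffs while retaining familywise error rate control.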
    • On Leveraging Representation Learning Techniques for Data Analytics in Biomedical Informatics

      Obradovic, Zoran; Vucetic, Slobodan; Souvenir, Richard M.; Kaplan, Avi (Temple University. Libraries, 2019)
      Representation Learning is ubiquitous in state-of-the-art machine learning workflows, including data exploration/visualization, data preprocessing, data model learning, and model interpretation. However, the majority of newly proposed Representation Learning methods are more suitable for problems with a large amount of data; applying them to problems with a limited amount of data may lead to unsatisfactory performance. Therefore, there is a need to develop Representation Learning methods tailored to problems with "small data", such as clinical and biomedical data analytics. In this dissertation, we describe our studies tackling challenging clinical and biomedical data analytics problems from four perspectives: data preprocessing, temporal data representation learning, output representation learning, and joint input-output representation learning. Data scaling is an important component of data preprocessing. The objective of data scaling is to scale/transform the raw features into reasonable ranges so that each feature of an instance is equally exploited by the machine learning model. For example, in a credit fraud detection task, a machine learning model may utilize a person's credit score and annual income as features, but because the ranges of these two features differ, the model may weigh one more heavily than the other. In this dissertation, I thoroughly introduce the data scaling problem and describe an approach to data scaling that can intrinsically handle outliers and lead to better model prediction performance. Learning new representations for data in unstandardized form is a common task in data analytics and data science applications. Usually, data come in tabular form: the data are represented by a table in which each row is the feature vector of an instance.
      However, it is also common that data are not in this form; for example, texts, images, and video/audio records. In this dissertation, I describe the challenge of analyzing imperfect multivariate time series data in healthcare and biomedical research and show that the proposed method can learn a powerful representation that accommodates various imperfections and leads to improved prediction performance. Learning output representations is a new aspect of Representation Learning, and its applications have shown promising results in complex tasks, including computer vision and recommendation systems. The main objective of an output representation algorithm is to explore the relationships among the target variables so that a prediction model can efficiently exploit the similarities and potentially improve prediction performance. In this dissertation, I describe a learning framework that incorporates output representation learning into time-to-event estimation. In particular, the approach learns the model parameters and time vectors simultaneously. Experimental results not only show the effectiveness of this approach but also demonstrate its interpretability through visualizations of the time vectors in 2-D space. Learning the input (feature) representation, the output representation, and the predictive model are closely related to each other, so it is a natural extension of the state of the art to consider them together in a joint framework. In this dissertation, I describe a large-margin ranking-based learning framework for time-to-event estimation with joint input embedding learning, output embedding learning, and model parameter learning. In the framework, I cast the functional learning problem as a kernel learning problem and, by adopting theories from Multiple Kernel Learning, propose an efficient optimization algorithm. Empirical results also show its effectiveness on several benchmark datasets.
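      The outlier problem in data scaling can be illustrated with a median/IQR scaler, one standard way to make per-feature scaling intrinsically robust to extreme values. This is a generic sketch of the problem setting, not the specific method proposed in the dissertation:

```python
import numpy as np

def robust_scale(X):
    """Scale each column by its median and interquartile range (IQR).

    Unlike min-max or z-score scaling, the median and IQR are barely
    affected by a few extreme values, so one outlier cannot squash the
    rest of a column into a tiny range.
    """
    X = np.asarray(X, dtype=float)
    med = np.median(X, axis=0)
    q1, q3 = np.percentile(X, [25, 75], axis=0)
    iqr = np.where(q3 - q1 == 0, 1.0, q3 - q1)  # guard against constant columns
    return (X - med) / iqr
```

With this transform, a column containing an income of 1,000,000 next to typical values near 50,000 still yields comparable scaled magnitudes for the typical rows.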

      Lorenz, Martin, 1951-; Walton, Chelsea; Dolgushev, Vasily; Riseborough, Peter (Temple University. Libraries, 2017)
      Representation theory is a field of study within abstract algebra that originated at the end of the 19th century in the work of Frobenius on representations of finite groups. More recently, Hopf algebras -- a class of algebras that includes group algebras, enveloping algebras of Lie algebras, and many other interesting algebras often referred to under the collective name of "quantum groups" -- have come to the fore. This dissertation discusses generalizations of certain results from group representation theory to the setting of Hopf algebras. Specifically, our focus is on the following two areas: Frobenius divisibility and Kaplansky's sixth conjecture, and the adjoint representation and the Chevalley property.
    • On Sufficient Dimension Reduction via Asymmetric Least Squares

      Dong, Yuexiao; Tang, Cheng Yong; Lee, Kuang-Yao (Temple University. Libraries, 2021)
      Accompanying the advances in computer technology is an increasing collection of high-dimensional data in many scientific and social studies. Sufficient dimension reduction (SDR) is a statistical method that enables us to reduce the dimension of predictors without loss of regression information. In this dissertation, we introduce principal asymmetric least squares (PALS) as a unified framework for linear and nonlinear sufficient dimension reduction. Classical methods such as sliced inverse regression (Li, 1991) and principal support vector machines (Li, Artemiou and Li, 2011) often do not perform well in the presence of heteroscedastic error, while our proposal addresses this limitation by synthesizing different expectile levels. Through extensive numerical studies, we demonstrate the superior performance of PALS in terms of both computation time and estimation accuracy. For the asymptotic analysis of PALS for linear sufficient dimension reduction, we develop new tools to compute the derivative of an expectation of a non-Lipschitz function. PALS is not designed to handle a symmetric link function between the response and the predictors. As a remedy, we develop expectile-assisted inverse regression estimation (EA-IRE) as a unified framework for moment-based inverse regression. We propose to first estimate the expectiles through kernel expectile regression, and then carry out dimension reduction based on random projections of the regression expectiles. Several popular inverse regression methods in the literature, including sliced inverse regression, sliced average variance estimation, and directional regression, are extended under this general framework. The proposed expectile-assisted methods outperform existing moment-based dimension reduction methods in both numerical studies and an analysis of the Big Mac data.
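      The expectile (asymmetric least squares) machinery underlying PALS can be sketched as follows: the τ-expectile of a sample minimizes a squared loss that weights positive residuals by τ and negative ones by 1 − τ. The fixed-point solver below is a minimal illustration of that definition, not the dissertation's kernel expectile regression:

```python
def asymmetric_squared_loss(residuals, tau):
    """Asymmetric least squares loss: weight tau on positive residuals,
    (1 - tau) on negative ones; tau = 0.5 recovers half the OLS loss."""
    return sum((tau if r > 0 else 1 - tau) * r * r for r in residuals)

def sample_expectile(x, tau, iters=200):
    """tau-expectile of a sample via iteratively reweighted means.

    Each pass re-weights points above the current estimate by tau and the
    rest by (1 - tau), then takes the weighted mean; the fixed point is
    the minimizer of the asymmetric squared loss.
    """
    mu = sum(x) / len(x)
    for _ in range(iters):
        w = [tau if xi > mu else 1 - tau for xi in x]
        mu = sum(wi * xi for wi, xi in zip(w, x)) / sum(w)
    return mu
```

Sweeping τ over several levels, as PALS does, probes different parts of the conditional distribution, which is what gives the method leverage under heteroscedastic errors.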
    • On the Design and Analysis of Cloud Data Center Network Architectures

      Wu, Jie, 1961-; Shi, Justin Y.; Wu, Jie, 1961-; Shi, Yuan; Ji, Bo, 1982- (Temple University. Libraries, 2016)
      Cloud computing has become pervasive in the IT world, as well as in our daily lives. The underlying infrastructures for cloud computing are cloud data centers. The Data Center Network (DCN) defines what networking devices are used and how different devices are interconnected in a cloud data center; thus, it has great impact on the total cost, performance, and power consumption of the entire data center. Conventional DCNs use tree-based architectures, where a limited number of high-end switches and high-bandwidth links are used at the core and aggregation levels to provide the required bandwidth capacity. A conventional DCN often suffers from high cost and low fault tolerance, because high-end switches are expensive and the failure of such a switch has disastrous consequences for the network. To avoid these problems and drawbacks, recent works adopt an important design principle: using cheap Commodity-Off-The-Shelf (COTS) switches to scale out data centers to large sizes, instead of using high-end switches to scale up. Based on this scale-out principle, a large number of novel DCN architectures have been proposed. These architectures fall into two categories: switch-centric and server-centric. In both categories, COTS switches are used to scale out the network to a large size. In switch-centric DCNs, routing intelligence is placed on switches, and each server usually uses only one port of its Network Interface Card (NIC) to connect to the switches. In server-centric DCNs, switches are used only as dummy crossbars; servers serve as both computation nodes and packet-forwarding nodes connecting switches and other servers, routing intelligence is placed on the servers, and multiple NIC ports may be used. This dissertation considers two fundamental problems in designing DCN architectures using the scale-out principle.
The first problem considers how to maximize the total number of dual-port servers in a server-centric DCN given a network diameter constraint. Motivated by the Moore Bound, which gives the upper bound on the number of nodes in a traditional graph given a node degree and diameter, we derive an upper bound on the maximum number of dual-port servers in a DCN, given a network diameter constraint and a switch port number. Then, we propose three novel DCN architectures, SWCube, SWKautz, and SWdBruijn, whose numbers of servers are close to the upper bound and larger than those of existing DCN architectures in most cases. SWCube is based on the generalized hypercube and accommodates a number of servers comparable to that of DPillar, the largest existing architecture prior to our work. SWKautz and SWdBruijn are based on the Kautz graph and the de Bruijn graph, respectively, and always accommodate more servers than DPillar. We investigate various properties of SWCube, SWKautz, and SWdBruijn; we also compare them with various existing DCN architectures and demonstrate their advantages. The second problem focuses on the tradeoffs between network performance and power consumption in designing DCN architectures. We have two motivations. The first is that most existing works take extreme positions in this tradeoff: some DCNs use too many networking devices to improve performance, so their power consumption is very high; other DCNs use too few networking devices, and their performance is very poor. We are interested in exploring the quantitative tradeoffs between network performance and power consumption in designing DCN architectures. The second motivation is that no unified performance and power consumption metrics exist for general DCNs. Thus, we propose two such unified metrics. 
Then, we propose three novel DCN architectures that achieve important tradeoff points in the design spectrum: FCell, FSquare, and FRectangle. Moreover, we find that in all three new architectures, routing intelligence can be placed on both servers and switches; thus they enjoy the advantages of both switch-centric and server-centric architectures and can be regarded as a new category of DCN architectures: dual-centric DCN architectures. We also investigate various other properties of the proposed architectures and verify that they are excellent candidates for practical cloud data centers.
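      The Moore Bound invoked in this abstract has a compact closed form. As a minimal illustration (the dissertation's own bound for dual-port servers additionally depends on the switch port number and is not reproduced here), the classical bound states that a graph with maximum degree d and diameter k has at most 1 + d·Σ_{i=0}^{k-1}(d−1)^i nodes:

```python
def moore_bound(d: int, k: int) -> int:
    """Upper bound on node count for any graph with max degree d and diameter k.

    N <= 1 + d * sum_{i=0}^{k-1} (d - 1)**i
    """
    if d < 2:
        # Degree 0 or 1: at most d + 1 nodes regardless of diameter.
        return d + 1
    return 1 + d * sum((d - 1) ** i for i in range(k))

# Degree 3, diameter 2: 1 + 3 + 3*2 = 10, attained by the Petersen graph.
print(moore_bound(3, 2))  # -> 10
```

For d = 2 the bound reduces to 1 + 2k, the size of an odd cycle, which shows the bound is tight in that case as well.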
    • On the Line: Lighting A Chorus Line

      Hoey, John S. (Temple University. Libraries, 2012)
      This thesis examines, details, and evaluates the process used while executing the lighting design for a production of A Chorus Line, produced by Temple University's Department of Theater. I will discuss each part of the design process as well as the technical rehearsal process and evaluate the choices made.
    • On the Poetics of Nonlinear Time: Dallapiccola's Canti di Liberazione

      Folio, Cynthia; Klein, Michael Leslie; Wright, Maurice, 1949-; Alegant, Brian, 1960- (Temple University. Libraries, 2016)
      The final piece of Dallapiccola’s “protest triptych” responding to Mussolinian fascism, Canti di Liberazione (1955) shows Dallapiccola’s abiding interest in the work of author James Joyce through its literary-inspired “simultaneity” and compositional strategies that suggest mixed temporalities (diverse temporal modes). This connection has significant implications for both the temporal and narrative forces at play within the work. Incorporating the work of philosophers such as Bergson and Adorno, I situate nonlinearity within the context of twentieth-century cultural life. Following Kern, I discuss the juxtaposition of distinct temporalities (simultaneity) in works such as Joyce’s Ulysses, which Dallapiccola adored, in addition to providing an overview of simultaneity in music. Next, I draw from Kramer and Reiner in examining the manifestations of linearity and nonlinearity in music. I explain how intertextuality has nonlinear implications and invites hermeneutic interpretation. Following Brown, I identify different types of symbolism and quotation in the work. Motivic elements such as the BACH cryptogram, musical references to Canti di Prigionia and Il Prigioniero, and structural symbolism such as palindromic gestures are all crucial components of Liberazione’s unique temporal nexus. I explain how Dallapiccola’s intertextuality and compositional devices (such as retrograde, cross-partitioning, motivic recurrence, and rhythmic figuration) parallel Joyce’s techniques in Ulysses. Finally, I present a temporal analysis of Liberazione. Drawing from Kramer, I show how characteristics such as stepwise pitch relationships, homophony, and triadic gestures suggest linearity, while pedal points, “floating rhythm,” proportions, and polarity present nonlinearity. Moreover, I demonstrate how mixed temporalities (more than one temporal mode) operate within the work, and how linearity and nonlinearity exist at different structural levels. 
I explain how the recurring 01 dyad—a motivic minor second or major seventh—also manifests in the background stepwise descent of the work (F# to F), subverting the narrative transcendence of the conclusion. Ultimately, I categorize the work as an example of Kramer’s multiply-directed linear time, given its structural pitch connections and goal-directed teleology.

      Obradovic, Zoran; Obradovic, Zoran; Vucetic, Slobodan; Dragut, Eduard Constantin; Zhao, Zhigen (Temple University. Libraries, 2019)
      In various domains, such as information retrieval, earth science, remote sensing, and social networks, vast amounts of data can be viewed as attributed graphs: they are associated with attributes that describe the properties of the data and with a structure that reflects the inter-dependencies among variables in the data. Given the broad coverage and the unique representation of attributed graphs, many studies with a focus on predictive modeling have been conducted. For example, node prediction aims to predict the attributes of nodes; link prediction aims to predict the graph structure; graph prediction aims to predict attributes of the entire graph. To provide better predictive modeling, we need to gain deep insights into the principal elements of the attributed graph. In this thesis, we explore answers to three open questions: (1) how to discover the structure of the graph efficiently; (2) how to find a compact and lossless representation of the attributes of the graph; (3) how to exploit the temporal contexts exhibited in the graph. For structure learning, we first propose a structure learning method capable of modeling the nonlinear relationship between attributes and target variables. On the task of graph regression, the method is more effective than alternative approaches that lack nonlinear modeling or structure learning. However, it suffers from the high computational cost of structure learning. To address this limitation, we then propose a conditional dependency network that can discover the graph structure in a distributed manner. The experimental results suggest that this method is much more efficient than other methods while being comparable in effectiveness. For representation learning, we introduce a Structure-Aware Intrinsic Representation Learning model. 
Unlike existing methods, which focus only on learning a compact representation of the target space of the attributed graph, our method jointly learns lower-dimensional embeddings of the target space and the feature space via structure-aware graph abstraction and feature-aware target embedding learning. The results indicate that the embeddings produced by the proposed method are better than those from alternative state-of-the-art embedding learning methods across all experimental settings. For temporal modeling, we introduce a time-aware neural attentive model to capture the temporal dynamics exhibited in session-based news recommendation, in which a user's sequential behaviors form an attributed graph with a chain structure and temporal contexts as attributes. The temporal dynamics specific to news include: readers' interests shift over time, readers comment irregularly on articles, and articles are perishable items with limited lifespans. The results demonstrate the effectiveness of our method against a number of state-of-the-art methods on several real-world news datasets.
    • On the Question of the Human: A General Economy of Contemporary Tastes

      O'Hara, Daniel T., 1948-; Singer, Alan, 1948-; Lee, Sue-Im, 1969- (Temple University. Libraries, 2013)
      In the latter half of the 20th-century and into the 21st, William Burroughs, Samuel Delany, and bioartists such as Oron Catts, Orlan, and Stelarc have all attempted to create works which respond to the increasing biopoliticization of contemporary society. The biopolitics of today seek to regularize life and structure it according to the imperatives of economic thought, a process by which the human becomes the Foucauldian homo oeconomicus. This restricted logic of biopolitics desperately tries to cover the explosive excess of the world today, what Bataille calls general economy. The artists under consideration in this work attempt to uncover this state of excess. While they are typically seen as exploring fantastic realms of the transgressive or, in the case of bioartists, attempting to emulate science fiction, in fact it is their realism which provokes. These artists reveal the heterological body, that which cannot be contained or described by the biopolitical regime. In so doing, they rewrite our standards of taste and point the way to understandings of the human that have been otherwise unavailable to us. William Burroughs in Naked Lunch highlights the manipulability of affect in contemporary society through the reduction of the human to bare life. He uses the figure of flesh/meat as a way of depicting the heterogeneous body and of generating a counter-affect, or free-floating affect, which unlike typical affect is not worked up into emotion. Samuel Delany, too, describes the heterogeneous or destabilized body in the heterotopia of his novel Dhalgren. While Burroughs is unable or unwilling to gesture towards the potentially radical implications of the heterogeneous body, Delany proposes a new model of community that rests upon the revelation of the heterogeneous body, a community which acts as one informed by an affirmative biopolitics. 
Bioart, a somewhat vexed genre of art, attempts to construct artworks that both utilize and critique new science and technology of the body. The life sciences are complicit in the rise of the biopolitical state and further the view of the human as constrained by its material substrate. Fetishistic bioart problematically reproduces a fascination with the life sciences and advanced technology. However, the bioart which I call sacred has a demystifying effect and attempts to use the knowledge gained by the life sciences to expand our understanding of the human, going beyond the bounds of that very knowledge itself.