An increasing number of today's children and adolescents grow up in single-parent and blended families. Many studies focus on the negative consequences of divorce for adolescent identity development. This empirical study examines the impact of family structure and functioning on identity development in its various domains. Findings revealed that young college students from non-original families seemed less socially integrated than others and had fewer individuals to provide them with identity-relevant information. Cognitive openness was the best predictor of identity exploration in the ideological and interpersonal domains, while the best predictor of commitment was the perceived importance of a particular identity domain. Family structure showed only one small direct effect, on identity commitment in the interpersonal domain. This book provides insight to those seeking to understand the complexity of identity formation in the light of current shifts in family structure, and should be useful to psychologists, parent educators, social workers, and other professionals interested in families, adolescents, and young adults.
Solving partial differential equations (PDEs) is a fundamental challenge in many application domains in industry and academia alike. With increasingly large problems, efficient and highly scalable implementations become more and more crucial. Today, facing this challenge is more difficult than ever due to the increasingly heterogeneous hardware landscape. One promising approach is developing domain-specific languages (DSLs) for a set of applications. Using code generation techniques then allows targeting a range of hardware platforms while concurrently applying domain-specific optimizations in an automated fashion. The present work aims to further the state of the art in this field. As our domain, we choose PDE solvers and, in particular, those from the group of geometric multigrid methods. To avoid too broad a focus, we restrict ourselves to methods working on structured and patch-structured grids. We face the challenge of handling a domain as complex as ours, while providing different abstractions for diverse user groups, by splitting our external DSL ExaSlang into multiple layers, each specifying different aspects of the final application. Layer 1 is designed to resemble LaTeX and allows inputting continuous equations and functions. Their discretization is expressed on layer 2. It is complemented by algorithmic components which can be implemented in a Matlab-like syntax on layer 3. All information provided to this point is summarized on layer 4, enriched with particulars about data structures and the employed parallelization. Additionally, we support automated progression between the different layers. All ExaSlang input is processed by our jointly developed Scala code generation framework to ultimately emit C++ code.
We particularly focus on generating applications parallelized with, e.g., MPI and OpenMP that are able to run on workstations and large-scale clusters alike. We showcase the applicability of our approach by implementing simple test problems, such as Poisson's equation, as well as relevant applications from the field of computational fluid dynamics (CFD). In particular, we implement scalable solvers for the Stokes, Navier-Stokes and shallow water equations (SWE), discretized using finite differences (FD) and finite volumes (FV). For the case of Navier-Stokes, we also extend our implementation towards non-uniform grids, thereby enabling static mesh refinement, as well as advanced effects such as the simulated fluid being non-Newtonian and non-isothermal.
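As a rough illustration of the kind of solver described here, the following is a hand-written Python sketch (not ExaSlang-generated C++; the function name and parameters are chosen for this sketch) of a damped Jacobi iteration, the classic smoother used inside geometric multigrid, applied to Poisson's equation on a uniform finite-difference grid:

```python
import numpy as np

def jacobi_poisson(f, h, sweeps=500, omega=0.8):
    """Damped Jacobi sweeps for -laplace(u) = f on the unit square,
    using the 5-point finite-difference stencil and homogeneous
    Dirichlet boundary conditions (boundary values stay zero)."""
    u = np.zeros_like(f)
    for _ in range(sweeps):
        u_new = u.copy()
        # Jacobi update: u = 0.25 * (sum of 4 neighbors + h^2 * f),
        # blended with the previous iterate by the damping factor omega.
        u_new[1:-1, 1:-1] = (1 - omega) * u[1:-1, 1:-1] + omega * 0.25 * (
            u[:-2, 1:-1] + u[2:, 1:-1] + u[1:-1, :-2] + u[1:-1, 2:]
            + h * h * f[1:-1, 1:-1]
        )
        u = u_new
    return u
```

A geometric multigrid method accelerates this idea by applying only a few such smoothing sweeps per level and correcting the remaining low-frequency error on successively coarser grids; the generated solvers discussed in the book combine smoothers of this kind into full multigrid cycles.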
Patients are more empowered to shape their own health care today than ever before. Health information technologies are creating new opportunities for patients and families to participate actively in their care, manage their medical problems and improve communication with their healthcare providers. Moreover, health information technologies are enabling healthcare providers to partner with their patients in a bold effort to optimize quality of care, improve health outcomes and transform the healthcare system at the macro level. In this book, leading figures discuss the existing needs, challenges and opportunities for improving patient engagement and empowerment through health information technology, mapping out what has been accomplished and what work remains to truly transform the care we deliver and engage patients in their care. Policymakers, healthcare providers and administrators, consultants and industry managers, researchers and students and, not least, patients and their family members should all find value in this book. "In the exciting period that lies just ahead, more will be needed than simply connecting patients to clinicians, and clinicians to each other. The health care systems that will be most effective in meeting patients' needs will be those that can actually design their 'human wares' around that purpose. This book provides deep insight into how information technology can and will support that redesign." Thomas H. Lee, MD, MSc, Chief Medical Officer, Press Ganey Associates; Professor of Medicine, Harvard Medical School; and Professor of Health Policy and Management, Harvard School of Public Health. The editors, Drs. Maria Adela Grando, Ronen Rozenblum and David W. Bates, are widely recognized professors, researchers and experts in the domain of health information technology, patient engagement and empowerment. Their research, lectures and contributions in these domains have been recognized nationally and internationally. Dr. Grando is affiliated with Arizona State University and the Mayo Clinic, and Drs. Rozenblum and Bates are affiliated with Brigham and Women's Hospital and Harvard University.
Information and communication technologies have long provided the backbone of telecommunication networks, making communication services an elementary foundation of today's globally connected economy. The telecommunication landscape has experienced dramatic transformations through the convergence of the telecom and Internet worlds. The previously closed telecommunication domain is currently transforming itself, through the so-called NGN evolution, into a highly dynamic multiservice infrastructure that supports rich multimedia applications and provides comprehensive support for various access technologies. The control layer of such NGNs is of paramount importance, as it represents the convergent mediator between access and services. The use and optimization of the IP Multimedia Subsystem (IMS) has been researched in this domain for many years, and today it represents the world-wide recognized control platform for fixed and mobile NGNs. Research on protocols and services for such NGN architectures, due to the convergence of technologies, applications and business models, as well as the need for highly dynamic and short innovation cycles, is highly complex and requires early access to vendor-independent, yet close-to-real-life, validation environments: the so-called open technology test-beds.
The present thesis describes the author's extensive research over the last nine years in the field of open NGN test-beds. It focuses on the design, development and deployment of the Open Source IMS Core project, which has for years been the foundation of numerous NGN test-beds and countless NGN research and development projects in both academia and industry around the globe. Major emphasis is placed on ensuring flexibility, performance, reference functionality and interoperability, as well as on satisfying elementary design principles of such test-bed toolkits. The study also describes and evaluates the use of Open Source principles, highlighting their advantages for the creation, impact and sustainability of a global OpenIMSCore research community. Moreover, the work documents that the essential design principles and methodology employed can be reused in a generic way to create test-bed toolkits in other technology domains. This is shown by introducing the OpenEPC project, which provides seamless integration of different mobile broadband technologies.
A chatbot is expected to be capable of supporting a cohesive and coherent conversation and to be knowledgeable, which makes it one of the most complex intelligent systems being designed nowadays. Designers have to learn to combine intuitive, explainable language understanding and reasoning approaches with high-performance statistical and deep learning technologies. Today, there are two popular paradigms for chatbot construction: 1. Build a bot platform with universal NLP and ML capabilities so that a bot developer for a particular enterprise, not being an expert, can populate it with training data; 2. Accumulate a huge set of training dialogue data, feed it to a deep learning network and expect the trained chatbot to automatically learn "how to chat". Although these two approaches are reported to imitate some intelligent dialogues, both are unsuitable for enterprise chatbots, being unreliable and too brittle. The latter approach is based on a belief that some learning miracle will happen and a chatbot will start functioning without thorough feature and domain engineering by an expert and without interpretable dialogue management algorithms. High-performance enterprise chatbots with extensive domain knowledge require a mix of statistical, inductive and deep machine learning, learning from the web, syntactic, semantic and discourse NLP, ontology-based reasoning, and a state machine to control the dialogue. This book provides a comprehensive source of algorithms and architectures for building chatbots for various domains, based on recent trends in computational linguistics and machine learning. The foci of this book are applications of discourse analysis in text relevance assessment, dialogue management and content generation, which help to overcome the limitations of platform-based and data-driven approaches. Supplementary material and code are available at https://github.com/bgalitsky/relevance-based-on-parse-trees
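To make the state-machine idea concrete, here is a minimal, hypothetical sketch in Python; the states, intents and responses are invented for illustration and are not taken from the book's code base:

```python
class DialogueStateMachine:
    """Toy dialogue controller: a table of (state, intent) -> (next_state,
    response) transitions, with a fallback that keeps the dialogue coherent
    when the user's intent is not expected in the current state."""

    def __init__(self):
        # All state and intent names below are hypothetical examples.
        self.transitions = {
            ("greeting", "ask_product"): (
                "clarify_need", "Which product line are you interested in?"),
            ("clarify_need", "give_detail"): (
                "recommend", "Based on that, here is a suggestion."),
            ("recommend", "accept"): (
                "closing", "Great, I will set that up."),
        }
        self.state = "greeting"

    def step(self, intent):
        key = (self.state, intent)
        if key not in self.transitions:
            # Unrecognized intent: stay in the current state and re-prompt.
            return "Sorry, could you rephrase that?"
        self.state, response = self.transitions[key]
        return response
```

In a real enterprise bot, the intent passed to `step` would come from the NLP pipeline (syntactic, semantic and discourse analysis), and the transition table would be far richer; the point of the sketch is only that explicit states make the dialogue flow interpretable and debuggable, in contrast to end-to-end learned behavior.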
This book presents recent research in the field of reuse and integration, and will help researchers and practitioners alike to understand how they can implement reuse in different stages of software development and in various domains, from robotics and security authentication to environmental issues. Indeed, reuse is not confined to reusing code; it can be included in every software development step. The challenge today is more about adapting solutions from one language to another, or from one domain to another. Validating the reused artifacts in their new environment is also necessary, at times even critical. The book includes high-quality research papers on these and many other aspects, written by experts in information reuse and integration, who cover the latest advances in the field. Their contributions are extended versions of the best papers presented at the IEEE International Conference on Information Reuse and Integration (IRI) and the IEEE International Workshop on Formal Methods Integration (FMI), which were held in San Diego in August 2017.
Item response theory (IRT) has moved beyond the confines of educational measurement into assessment domains such as personality, psychopathology, and patient-reported outcomes. Classic and emerging IRT methods and applications that are revolutionizing psychological measurement, particularly for health assessments used to demonstrate treatment effectiveness, are reviewed in this new volume. World-renowned contributors present the latest research and methodologies about these models along with their applications and related challenges. Examples using real data, some from NIH-PROMIS, show how to apply these models in actual research situations. Chapters review fundamental issues of IRT, modern estimation methods, testing assumptions, evaluating fit, item banking, scoring in multidimensional models, and advanced IRT methods. New multidimensional models are provided along with suggestions for deciding among the family of IRT models available. Each chapter provides an introduction, describes state-of-the-art research methods, demonstrates an application, and provides a summary. The book addresses the most critical IRT conceptual and statistical issues confronting researchers and advanced students in psychology, education, and medicine today. Although the chapters highlight health outcomes data, the issues addressed are relevant to any content domain. The book addresses: IRT models applied to non-educational data, especially patient-reported outcomes; differences between cognitive and non-cognitive constructs and the challenges these bring to modeling; the application of multidimensional IRT models designed to capture typical-performance data; cutting-edge methods for deriving a single latent dimension from multidimensional data; a new model designed for the measurement of constructs that are defined at one end of a continuum, such as substance abuse; scoring individuals under different multidimensional IRT models and item banking for patient-reported health outcomes; and how to evaluate measurement invariance, diagnose problems with response categories, and assess growth and change. Part 1 reviews fundamental topics such as assumption testing, parameter estimation, and the assessment of model and person fit. New, emerging, and classic IRT models, including modeling multidimensional data and the use of new IRT models in typical-performance measurement contexts, are examined in Part 2. Part 3 reviews the major applications of IRT models, such as scoring, item banking for patient-reported health outcomes, evaluating measurement invariance, linking scales to a common metric, and measuring growth and change. The book concludes with a look at future IRT applications in health outcomes measurement. The book summarizes the latest advances and critiques foundational topics such as multidimensionality, assessment of fit, and handling non-normality, as well as applied topics such as differential item functioning and multidimensional linking. Intended for researchers, advanced students, and practitioners in psychology, education, and medicine interested in applying IRT methods, this book also serves as a text for advanced graduate courses on IRT or measurement. Familiarity with factor analysis, latent variables, IRT, and basic measurement theory is assumed.
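As a concrete example of the family of models discussed, the two-parameter logistic (2PL) item response function, a standard IRT model, can be sketched in a few lines of Python (the function name is chosen for this sketch):

```python
import math

def irt_2pl(theta, a, b):
    """Two-parameter logistic (2PL) item response function:
    P(endorse item | theta) = 1 / (1 + exp(-a * (theta - b))),
    where theta is the latent trait level, a is the item's
    discrimination, and b is its difficulty (location)."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))
```

By design, a respondent whose trait level equals the item difficulty (theta = b) has a 0.5 probability of endorsing the item, and higher discrimination a makes the curve steeper around that point; multidimensional IRT models of the kind reviewed in this volume generalize theta to a vector of latent traits.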
Presenting a comprehensive and up-to-date account of the domain of highly strained hydrocarbons with unusual spatial structure, an experienced editor and top authors cover the whole range of these important molecules, from [1.1.1]propellane to fullerenes and nanotubes. The necessity of studies in this area, encompassing sometimes exotic molecules, is discussed in detail, showing their importance for basic science and practical applications; the fact that the latter mostly cannot be foreseen is amply documented. Until not long ago, studying such molecules was an elitist activity; few synthetic chemists succeeded in their syntheses. Today, the field has broadened in view of emerging practical applications, and fullerenes and nanotubes are among the most actively developing domains. Chapters include both experimental and theoretical studies. The former cover syntheses and the unusual physicochemical properties related to the strain and atypical geometry of the molecules under scrutiny. The latter show the importance of model calculations, which help make precise basic ideas of chemistry, such as the chemical bond, on the one hand, and which are used to propose novel plausible synthetic targets on the other. The monograph is aimed not only at PhD students and newcomers who seek an introduction to this area, but also at specialists who want to obtain a broader perspective on the domain and make use of a comprehensive review of the literature.