Bookbot

Statistics for Social and Behavioral Sciences

This series is devoted to the quantitative methods that underpin research in the social and behavioral sciences. It offers well-grounded analyses and practical applications of statistical techniques essential to disciplines such as psychology, sociology, political science, and education. The collection spans both foundational texts and advanced monographs, equipping researchers and students with key tools for data analysis and interpretation. Its aim is to connect statistical theory with empirical findings and real-world challenges.

Living Standards Analytics
Linear Models for Optimal Test Design
Projection Matrices, Generalized Inverse Matrices, and Singular Value Decomposition
Explanatory item response models
Missing Data and Small-Area Estimation
Statistics for lawyers

Recommended reading order

  • Designed to introduce the basics of mathematical probability and statistics useful to law students and practitioners, this second edition includes many new problems reflecting current developments in the law, and has been rewritten at a more elementary level. The book includes real-world case studies where statistical data has played a role.

    Statistics for lawyers
  • Missing Data and Small-Area Estimation

    Modern Analytical Equipment for the Survey Statistician

    • 360 pages
    • 13 hours of reading

    The book focuses on the intersection of missing data and small-area estimation, stemming from the author's experiences as a Campion Fellow. It emphasizes the importance of integrating academic and industrial statistics to enhance government statistics. The author reflects on the collaborative efforts during lectures and workshops, acknowledging the contributions of colleagues and the supportive environment at Massey University. This work highlights the need for closer ties between academia and industry to drive progress in statistical research and applications.

    Missing Data and Small-Area Estimation
  • This edited volume offers an integrated introduction to item response models, commonly utilized in psychology, education, and social sciences, through the lens of generalized linear and nonlinear mixed models. This new framework broadens the scope of item response models, highlighting their explanatory potential beyond traditional descriptive applications. The core idea is that item responses can be modeled based on various predictors, which may include characteristics of items, individuals, or their combinations, and can be either observed or latent, continuous or categorical. This approach generates a diverse array of models, encompassing existing item response models and introducing new ones, with particular emphasis on those featuring explanatory predictors, while also addressing descriptive models. The term "item responses" extends beyond conventional test data to include categorical data from repeated observations or longitudinal designs. The book begins with an introductory section followed by chapters detailing models for ordered-category data, multilevel models, differential item functioning, multidimensional models, local item dependency, and mixture models. It includes a chapter on statistical background and software, utilizing a unified notation approach and a consistent dataset for illustration. Computer commands from the SAS package, along with examples from other software, are provided for model estimation.

    Explanatory item response models
  • This book focuses on projections and singular value decomposition (SVD), essential for multivariate analysis. It provides a comprehensive discussion of generalized inverse matrices and their relation to projections. The text explores these concepts through linear transformations in finite-dimensional vector spaces, benefiting researchers and students in various fields.

    Projection Matrices, Generalized Inverse Matrices, and Singular Value Decomposition
  • The book delves into the evolution of psychometric methods over four decades, highlighting significant advancements in educational and psychological measurement. It discusses the emergence of criterion-referenced assessment, which prioritizes individual performance against defined knowledge and skills, and contrasts this with traditional norm-referenced approaches. Additionally, the introduction of item response theory (IRT) is examined, showcasing its advantages over classical test theory, particularly in enhancing measurement practices and addressing previous limitations in test development and evaluation.

    Linear Models for Optimal Test Design
  • The purpose of this book is to introduce, discuss, illustrate, and evaluate the colorful palette of analytical techniques that can be applied to the analysis of household survey data, with an emphasis on the innovations of the past decade or so. Most of the chapters begin by introducing a methodological or policy problem to motivate the subsequent discussion of relevant methods. They then summarize the techniques and draw on examples – many of them from the authors' own work – to convey a sense of their potential as well as their strengths and weaknesses. This book is meant for graduate students in statistics, economics, policy analysis, and the social sciences, especially, but certainly not exclusively, those interested in the challenges of economic development in the Third World. Additionally, the book will be useful to academics and practitioners who work closely with survey data, and it can serve as a reference work, to be taken down from the shelf and perused from time to time.

    Living Standards Analytics
  • A critical yet constructive description of the rich analytical techniques and substantive applications that typify how statistical thinking has been applied at the RAND Corporation over the past two decades. Case studies of public policy problems are useful for teaching because they are familiar: almost everyone knows something about health insurance, global warming, and capital punishment, to name but a few of the applications covered in this casebook. Each case study has a common format that describes the policy questions, the statistical questions, and the successful and the unsuccessful analytic strategies. Readers should be familiar with basic statistical concepts including sampling and regression. While designed for statistics courses in areas ranging from economics to health policy to the law at both the advanced undergraduate and graduate levels, empirical researchers and policy-makers will also find this casebook informative.

    Public policy and statistics
  • "Prove It With Figures" displays some of the tools of the social and statistical sciences that have been applied to the proof of facts in the courtroom and to the study of questions of legal importance. It explains how researchers can extract the most valuable and reliable data that can conveniently be made available, and how these efforts sometimes go awry. In the tradition of Zeisel's "Say It with Figures," a standard in the field of social statistics since 1947, it clarifies, in non-technical language, some of the basic problems common to all efforts to discern cause-and-effect relationships. Designed as a textbook for law students who seek an appreciation of the power and limits of empirical methods, the work also is a useful reference for lawyers, policymakers, and members of the public who would like to improve their critical understanding of the statistics presented to them. The many case histories include analyses of the death penalty, jury selection, employment discrimination, mass torts, and DNA profiling. Hans Zeisel was Professor of Law and Sociology Emeritus at the University of Chicago, where he pioneered the application of social science to the law. Earlier, he had a distinguished career in public opinion and market research. He has written on a wide variety of topics, ranging from research methodology and history to law enforcement, juries, and Sheakespeare. He was elected Fellow of the American Statistical Assoication and the American Association for the Advancement of Science, and in 1980 he was inducted into the Market Research Hall of Fame. David Kaye is Regents Professor at the Arizona State University, where he teaches evidence and related topics. An author of several law textbooks and treatises, his work also has appeared in journals of

    Prove it with figures
  • Generalizability theory offers an extensive conceptual framework and a powerful set of statistical procedures for characterizing and quantifying the fallibility of measurements. Robert Brennan, the author, has written the most comprehensive and up-to-date treatment of generalizability theory. The book provides a synthesis of those parts of the statistical literature that are directly applicable to generalizability theory. The principal intended audience is measurement practitioners and graduate students in the behavioral and social sciences, although a few examples and references are provided from other fields. Readers will benefit from some familiarity with classical test theory and analysis of variance, but the treatment of most topics does not presume specific background.

    Generalizability theory
  • This monograph presents methods for full comparative distributional analysis based on the relative distribution. This provides a general integrated framework for analysis, a graphical component that simplifies exploratory data analysis and display, a statistically valid basis for the development of hypothesis-driven summary measures, and the potential for decomposition - enabling the examination of complex hypotheses regarding the origins of distributional changes within and between groups. Written for data analysts and those interested in measurement, the text can also serve as a textbook for a course on distributional methods.

    Relative distribution methods in the social sciences
  • Bayesian Item Response Modeling

    Theory and Applications

    • 328 pages
    • 12 hours of reading
    Rating: 3.5 (6 ratings)

    Focusing on software implementations in S-PLUS and R, this volume is designed for measurement specialists and students interested in the Bayesian approach to modern test theory. It serves as both a practical handbook and an educational textbook, providing access to online software implementations for enhanced learning and instruction.

    Bayesian Item Response Modeling
  • Ordinal Data Modeling is a comprehensive treatment of ordinal data models from both likelihood and Bayesian perspectives. A unique feature of this text is its emphasis on applications. All models developed in the book are motivated by real datasets, and considerable attention is devoted to the description of diagnostic plots and residual analyses. Software and datasets used for all analyses described in the text are available on websites listed in the preface.

    Ordinal data modeling
  • This book emphasizes the practical side of computer-based testing and presents suggestions, information, and ideas for its actual implementation. It provides information that can be used to make informed decisions, including the type of computer-based test that should be administered, possible cost to examinees, examinee reactions to the test, scoring issues, computer mode effects, and many more.

    Practical considerations in computer based testing
  • Now available in paperback, this book is organized in a way that emphasizes both the theory and applications of the various variance estimation techniques. Results are often presented in the form of theorems; proofs are omitted when trivial or when a reference is readily available. The aim is to present theory that applies to large, complex surveys and to provide an easy reference for the survey researcher who is faced with the problem of estimating variances for real survey data.

    Introduction to variance estimation
  • Expert testimony based on scientific evidence is increasingly scrutinized in the legal system, particularly following a trilogy of U.S. Supreme Court cases that require judges to evaluate the relevance and reliability of such testimony. In response, the Federal judiciary, alongside the American Association for the Advancement of Science, has launched a project to provide judges with access to expert guidance when needed. This focus on the interpretation of scientific evidence, especially probabilistic data, is also evident in England, Australia, and various European nations. A collection of articles by statisticians and legal scholars addresses the challenges associated with statistical evidence in court. Several pieces focus on DNA evidence, detailing the complexities of calculating the probability of a random individual's profile matching that of the evidence and interpreting these results accurately. Authors share their courtroom experiences, with some expressing disillusionment that led them to reduce their involvement. Additional articles explore the use of statistical evidence in cases of discrimination, product liability, environmental regulation, and sentencing fairness, highlighting how engagement in legal statistics has uncovered intriguing statistical challenges that warrant further investigation.

    Statistical science in the courtroom
  • This book will be an important reference for several groups: (a) statisticians and others interested in the theory behind equating methods and the use of model-based statistical methods for data smoothing in applied work; (b) practitioners who need to equate tests, including those with these responsibilities in testing companies, state testing agencies, and school districts; and (c) instructors in psychometric and measurement programs. The authors assume some familiarity with linear and equipercentile test equating, and with matrix algebra.

    The Kernel method of test equating
  • Test Equating, Scaling, and Linking

    Methods and Practices

    • 592 pages
    • 21 hours of reading
    Rating: 5.0 (2 ratings)

    The third edition of this esteemed guide covers designs for data collection and methods for combining scores across test forms. It provides comprehensive coverage of equating, scaling, and linking, along with practical advice on designing, administering, and scoring standardized tests. This revised edition deepens understanding of effective assessment practices, making it an invaluable resource for educators and assessment professionals.

    Test Equating, Scaling, and Linking
  • This book examines extensions of the Rasch model, one of the most researched and applied models in educational research and social science. This collection contains 22 chapters by some of the most renowned international experts in the field. They cover topics ranging from general model extensions to applications in fields as diverse as cognition, personality, organizational and sports psychology, and health sciences and education.

    Multivariate and Mixture Distribution Rasch Models