Status: In Progress
No comprehensive open-source normed picture database exists to support parapsychology researchers in experiments that require images. This large collaborative project with the University of Denver and The Institute for Love and Time aims to norm 1800 pictures on 18 characteristics using multiple judges. IRVA abstract, presented at the International Remote Viewing Association (IRVA) conference on July 22, 2022, in Menlo Park, California.
Title: Content vs. Meaning in Images: Creating an open-sourced image database and target pool for RV research
Authors: Damon Abraham, Cedric Cannard, Julia Mossbridge, Helané Wahbeh
Abstract: It is not always clear what makes a good (or a bad) image for use in remote viewing (RV) research and applications. Many have speculated that beyond the contents of an image or the specific objects represented, certain psychologically meaningful subjective dimensions, such as its interestingness, numinosity, or emotionality, may be predictive of RV success or may explain some instances of displacement effects. However, beyond this speculation, there have been few attempts to empirically validate which, if any, subjective dimensions are most important in this regard. Likewise, to date there has been no broad, concerted effort to quantify a sizable pool of images across multiple candidate subjective dimensions.
In an ongoing collaboration between researchers from the University of Denver (DU), The Institute for Love and Time (TILT), and the Institute of Noetic Sciences (IONS), we are developing an open-sourced database of images for use in both parapsychological and traditional psychological research. This project aims to generate normative ratings for around 2000 unique images that vary across 18 separate subjective dimensions, or classes. The 18 classes are: Abstractness, Animacy, Awe, Conceptual Complexity, Embodiment, Emotional Valence, Emotional Intensity, Interestingness, Likelihood (Hypothetical Distance), Movement, Natural, Numinosity, Visual Perspective, Physical Distance, Sensory, Social, Temporal Distance, and Visual Complexity.
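For convenience, the 18 classes can be captured in a small data structure for use in stimulus-selection or analysis scripts. The Python sketch below is purely illustrative; the constant name RATING_CLASSES is ours and is not taken from the project's materials.

```python
# Illustrative listing of the 18 rating classes named in the abstract.
# The constant name and ordering are our own, not part of the project's tooling.
RATING_CLASSES = [
    "Abstractness", "Animacy", "Awe", "Conceptual Complexity", "Embodiment",
    "Emotional Valence", "Emotional Intensity", "Interestingness",
    "Likelihood (Hypothetical Distance)", "Movement", "Natural", "Numinosity",
    "Visual Perspective", "Physical Distance", "Sensory", "Social",
    "Temporal Distance", "Visual Complexity",
]
assert len(RATING_CLASSES) == 18
```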
For each of these 18 classes and for every image, we are currently collecting subjective ratings from a participant pool consisting of students from DU, community volunteers, and paid Amazon Mechanical Turk (MTurk) workers. We have divided the classes into three separate counterbalanced orders, each consisting of six of the 18 classes. In one set of the rating task, a participant views 80 separate images sequentially, providing a subjective rating on each of the six classes for every image viewed. Ratings are captured with a sliding numerical scale on which higher (lower) values indicate greater (lesser) endorsement of the specific quality being measured in the image. Participants can complete multiple sets (i.e., rating another set of 80 images on six classes). Many participants will rate every image in the database, and their ratings are then aggregated across raters to form a distribution, with a mean rating, for each image and class. The output of the project's first phase will therefore consist of normative mean ratings and distribution characteristics across the 18 classes for approximately 2000 images in total.
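As a rough sketch of the aggregation step described above, ratings collected in long format (one row per participant, image, and class) could be reduced to per-image, per-class normative statistics as shown below. The column names, example values, the 0 to 100 slider range, and the use of pandas are illustrative assumptions rather than the project's actual pipeline.

```python
import pandas as pd

# Hypothetical long-format ratings table: one row per participant x image x class.
# Column names and the 0-100 slider range are assumptions for illustration only.
ratings = pd.DataFrame(
    {
        "participant_id": ["p01", "p02", "p03", "p01", "p02"],
        "image_id": ["img_0001"] * 3 + ["img_0002"] * 2,
        "rating_class": ["Numinosity"] * 3 + ["Awe"] * 2,
        "rating": [72, 64, 80, 15, 22],  # slider values; higher = greater endorsement
    }
)

# Aggregate across raters to obtain normative statistics for each image-class pair:
# the mean rating plus simple distribution characteristics (SD and number of raters).
norms = (
    ratings
    .groupby(["image_id", "rating_class"])["rating"]
    .agg(mean_rating="mean", sd_rating="std", n_raters="count")
    .reset_index()
)
print(norms)
```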