
ABOUT US

Our Collective Consciousness Can Change the Physical World


INTRODUCTION

We're on a mission to understand the nature of our collective consciousness, its impact on our everyday lives, and the ways we can use it to create a better world.

Developed and maintained by the donor-funded HeartMath Institute, the Global Consciousness Project 2.0 is collaborating with scientists and engineers around the world to research the interconnectivity between human consciousness and the physical world.


HISTORY

The donor-funded HeartMath Institute, founded in 1991, is the home of GCP 2.0. The project builds upon the original Global Consciousness Project (GCP 1), created in 1997 by Dr. Roger Nelson and a group of volunteers who sought to detect, quantify, and study human consciousness on a deeper level.

The team distributed physical random number generators on a global scale and measured the impact that large-scale, synchronized human emotion and focus of attention had on the output.

Can collective human emotion tip the scales, shifting the output of a global network of RNGs away from pure chance?

Our hypothesis: Yes

HISTORY HIGHLIGHTS

By 2015, the GCP 1 team had analyzed 500 pre-registered world events and explored hundreds of others, from love and compassion meditations to tragedies and natural disasters.

The data showed, with odds of over a trillion to one against chance, that when people experience a collective emotional response, there is a tangible effect on our physical world.

Today, we’re expanding our network of improved RNG devices across the world to further examine the real-world effects of human consciousness.


HOW WE DO IT

The Global Consciousness Project 2.0 uses a global network of next-generation random number generators (NextGen RNGs), which are physical electronic devices that detect and quantify important aspects of human consciousness. 


We measure the effects of human consciousness via a globally distributed network of physical devices that produce random numbers. These devices are called random number generators (RNGs). Our hypothesis is that shared consciousness can cause the network to stop behaving in a random fashion. This will occur either when large numbers of people share focused attention on the same thing, such as a global event that draws out compassion, or when smaller numbers of people are in a more coherent state and hold a collective intention. In other words, our collective consciousness can change the physical world.

This effect is not limited to a network of unusual devices. We understand the devices to be detectors for a much broader effect. If human consciousness can affect these electronic devices, then wouldn't it affect other physical systems? Other experiments, such as those using organic RNGs, suggest that these effects extend to a broader range, including people, water, plants, trees, weather, and nature in general. The effects can be healing or harmful depending on the intention. However, consciousness effects are not readily observed in all objects because there are other competing influences on their behavior. That is why we use RNGs to detect this effect: they can be engineered to be largely free from other influences.

The original Global Consciousness Project (now called GCP 1) was created in 1997 at Princeton University by Dr. Roger Nelson and a group of researchers working at the boundary areas of physics and consciousness. We are honored that in 2020, Dr. Nelson asked the HeartMath Institute to become the new home base for the Global Consciousness Project. Over the course of 20+ years, many events were analyzed, including New Year's celebrations, large-scale meditations, religious events, and shocking events such as terrorist attacks and natural tragedies. The composite statistic for the project shows a 7-sigma departure from expectation, indicating a probability on the order of 1 in a trillion that the correlation of the GCP data with global events is merely a chance fluctuation. The evidence clearly suggests that focused emotional energy and attention can interact with and affect the physical world.

Expanding on the groundbreaking findings of GCP 1, the GCP 2.0 network is much larger. We designed a new device with 4 RNGs per device and plan to build 1000 of these for a total of 4000 RNGs, the largest RNG network on the planet at the time of its design. This is more than 50 times as many RNGs as GCP 1 at its peak of about 60 RNGs. It also uses the latest technology and cryptographic standards for randomness. We are tracking signals at every stage of the process including analysis of the quantum noise sources used in the process of producing a random number, so that we can now dig into the effect of consciousness on the underlying electrons and voltages. Finally, due to HeartMath’s extensive collaborations and network of citizen scientists and certified trainers, GCP 2.0 has a broader reach and an explicit mission to engage as many people globally as possible.

This larger network has several advantages. It is an international collaboration of citizen scientists all across the planet, encouraging global engagement in the project and feeding the consciousness fields with more coherent emotions such as love and compassion. Additionally, it has been demonstrated (see figure) that having more devices in the network may result in a more sensitive detection system. More importantly, a larger network will allow us to examine topological effects: how consciousness-related effects distribute around the world when localized events take place. This should allow us to show unprecedented, striking evidence of the effects of human consciousness on the physical world.

The more devices there are, the more sensitive the network is to consciousness effects

The RNGs are designed to generate truly random numbers. We have cleaned ("whitened") out any other influences or biases coming from environmental effects such as the temperature of the room. That way, when we do see non-random effects, we can attribute them to unconventional causes like collective consciousness. For more details on how we ensure the numbers are random, see the RNG Technology section.

One approach is to generate hypotheses before running our analysis, which is the standard way to perform rigorous statistical testing of an idea. We choose events to analyze based on our understanding of collective consciousness effects, adding them to a list called the "hypothesis registry." Then, if we run the analysis and find non-random results, the result supports our hypothesis. We have to make sure not to "cheat": once we choose an event and analyze it, we must keep the result even if it is random and does not support our point. We also test against controls, confirming that if we choose events at random without looking for collective consciousness, the analysis yields nothing unusual.

Another approach is not to involve an analyst’s choice at all, but to observe correlations between the network output and other measures of collective consciousness, such as sentiment measured from news sources. By correlating to several such measures, we gain confidence in consciousness as the source.
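As an illustration only, such a correlation check can be sketched in Python. Both series below are simulated (the 0.3 coupling coefficient is invented purely so the toy example shows a detectable relationship); a real analysis would use the network output and an actual external measure such as a news-sentiment index.

```python
import numpy as np

rng = np.random.default_rng(seed=1)

# Simulated hourly series: an aggregated network measure and an independent
# measure of collective consciousness (e.g. a news-sentiment index).
# The 0.3 coupling is an assumption made only for this illustration.
hours = 720
netvar = rng.normal(size=hours)
sentiment = 0.3 * netvar + rng.normal(size=hours)

# Pearson correlation between the network output and the external measure.
r = float(np.corrcoef(netvar, sentiment)[0, 1])
```

A significantly non-zero correlation across several such independent measures, rather than any single one, is what would build confidence in a common source.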

Although there are multiple ways to measure this behavior, typically we perform a network variance (NetVar) calculation. This measure adds together the correlated behavior occurring across devices in the network. Significant cross-correlations occur during times of shared human attention and emotionality (consciousness). Each RNG generates many random numbers every second, to which the following procedure is applied:

  1. The random numbers in general follow a normal (i.e. “bell curve”) distribution and are standardized into Z-scores. This is typical in statistics to simplify the math, and this Z-score highlights how much each number generated by an RNG strays from what we expected from pure randomness; it does not change the results.
  2. All of the Z-scores within and across all of the devices in the network (or a subgroup) are added together via a Stouffer sum (i.e. divide the sum by the square root of the number of Z-scores being summed) to yield the network Z-score. In other words, we add together the “unexpectedness” of each number generated by an RNG (i.e. how far from random it is) to get a total “unexpectedness” for the network.
  3. This score is squared to provide the NetVar per second. This metric highlights correlations across different RNGs by effectively multiplying their Z-scores together. It is here that we find the unexpected behavior driven by global consciousness. We know this behavior is significant because we compare it to the chi-squared distribution, which shows us the expected behavior for purely random numbers: it describes the square of random numbers drawn from a normal distribution. When the NetVar is improbably high or low, it falls outside the confidence intervals of this distribution. Such numbers are not just random but influenced by collective consciousness.
  4. Subtract one from the NetVar so that it is centered at zero for easier chart plotting.
  5. This is aggregated across the period of time during which there is coherent consciousness. Unusual behavior is tracked by divergences from chi-squared confidence band envelopes. Examples of this aggregation over time shown in the charts include a cumulative sum and a moving average. A cumulative sum means that at each point in time we show the sum of all NetVars up until that point. A 24-hour moving average means that at each point in time we show the average of all NetVars in the past 24 hours. The purpose of aggregating is to smooth out the random behavior while compounding effects due to coherent consciousness into a statistically significant result.
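The steps above can be sketched in Python. This is a minimal illustration on simulated trial counts, not real network data; the device count and duration are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(seed=0)

# Simulated stand-in for the network: one trial value (a 0-200 count of
# one-bits) per second from each device.
n_devices, n_seconds = 40, 3600
counts = rng.binomial(n=200, p=0.5, size=(n_devices, n_seconds))

# Step 1: standardize each trial to a Z-score
# (expected mean 100, standard deviation sqrt(200 * 0.25) ~= 7.07).
z = (counts - 100) / np.sqrt(200 * 0.25)

# Step 2: Stouffer sum across devices for each second -> network Z-score.
net_z = z.sum(axis=0) / np.sqrt(n_devices)

# Steps 3-4: square to get the NetVar, subtract 1 so its expectation is zero.
netvar = net_z ** 2 - 1

# Step 5: aggregate over time, here as a cumulative sum for plotting.
cumdev = np.cumsum(netvar)
```

With purely random input, as here, the cumulative sum wanders around zero inside the chi-squared confidence envelope; the hypothesized consciousness effect would appear as a sustained divergence from it.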

RNG TECHNOLOGY

Our random number generators (RNGs) produce random numbers every second from a stream of 1s and 0s.

Quantum random numbers are extracted from the random behavior of electrons in the circuit, via the quantum avalanche process across a Zener diode.

The numbers are filtered to further clean out any other influences or biases coming from environmental effects such as the temperature of the room, or electromagnetic influences or changes to the input power from the local power grid. That way, when we do see non-random effects, we attribute them to unconventional causes like collective consciousness.


An RNG is a device that generates random numbers. In a sense, it is like an electronic coin flipper, only instead of generating Heads or Tails, it generates a 0 or 1 bit. These bits are generated at high speed and combined to produce many random numbers every second.

Inside the RNGs are electronic circuits with several Zener diodes, which are semiconductors that typically only allow electric currents to flow in one direction. The diodes are reverse biased, which means that a voltage is applied that pushes electrons opposite to the diode’s preferred direction. The electrons pile up against the diode’s barrier until there are enough to set off a quantum avalanche breakdown. Think of it like snow on a mountain that piles up until even a very small disturbance sets off a cascading avalanche of snow. In the RNG, when enough electrons pile up, a few will randomly cross the barrier, knocking other electrons into motion until the entire pile of electrons has crossed. Then, the build-up of electrons restarts until another avalanche is triggered. This process is very fast; more than one million avalanches occur per second.

The avalanches are not identical. Some pile up more electrons than others, just like snow may pile up in different amounts before an avalanche is triggered. In the case of electrons, there are several factors that determine when the avalanche is set off, and one of these is quantum randomness. This is truly random, and we exploit this to generate truly random numbers. The size of the electron pile-up can be measured by the voltage they generate, and this voltage is a number that is captured at an aggregate level per second.

The voltage-based numbers are generated by quantum randomness but other factors as well. Those include other sources of randomness as well as bias due to the environment and electronic components, such as electromagnetic fields, temperature, and component aging. The most prominent bias in many analyses is the temperature inside the RNG, which typically follows the ambient temperature around it. As we have learned from the original GCP 1 project (History), another factor influencing the numbers is our collective consciousness.

We need to reduce the effects of non-random sources like temperature enough that the numbers are mostly random with some detectable influence from collective consciousness. This is achieved by whitening the numbers, a well-known process in the RNG community for ensuring true randomness. It is a way of scrambling the numbers further to eliminate non-random effects. There are several approaches to whitening, and these RNGs employ very similar techniques to the ones used in the ORION RNGs from GCP 1, as these have already been proven for consciousness detection. Had we used another approach, there could be a concern of either not enough removal of non-randomness, or else cleaning it out so well that even the effects of collective consciousness are removed.

There are three steps of whitening, each one further randomizing the numbers. These are accepted techniques for improving randomness, and we look at random numbers generated after each of these steps:

  1. The numbers go through a D flip-flop, which generates bits, i.e. ones and zeros (or, more precisely, high vs. low in the analog circuits). It continuously generates ones until there is a rising edge, i.e. the voltage rises from below average to above average. At that point, it flips to generating zeros until the next rising edge. Then it flops back to ones, and so on. These flips occur frequently and randomly, more than one million times per second.
  2. The bits from a fixed pair of diodes within the device are combined via XOR against each other. There are several diode pairs in each device. Bits are captured from each diode at a rate of 10,000 per second, i.e. a sampling interval of 0.1 milliseconds (ms). The XOR process takes the bits from the pair of diodes at each 0.1 ms and checks whether they are the same as each other. If yes, the outcome is a zero bit; if not, a one. This results in a single stream of 10,000 random bits per second from each diode pair.
  3. The resulting bits are combined with an alternating series of ones and zeros (101010…) via XOR to further randomize them.
  4. After whitening, a random number from 0 to 200 is generated per second by counting the number of one bits in the first 200 out of the 10,000 bits. This is the standard procedure used in past experiments like GCP 1. Counting across 200 bits helps to normalize and smooth the data. Keeping a gap of 9,800 unused bits between the ones we count ensures the random numbers are not related to each other by being adjacent in time.
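Steps 2 through 4 can be sketched in Python on simulated bit streams. Step 1, the D flip-flop, happens in analog hardware and is represented here as already-digitized bits; the streams are random stand-ins, not real device output.

```python
import numpy as np

rng = np.random.default_rng(seed=2)
fs = 10_000  # bits per second per diode

# Simulated raw bit streams from one pair of diodes (stand-ins for the
# flip-flop output of step 1).
diode_a = rng.integers(0, 2, size=fs)
diode_b = rng.integers(0, 2, size=fs)

# Step 2: XOR the paired diodes (0 if the bits match, 1 if they differ).
paired = diode_a ^ diode_b

# Step 3: XOR against an alternating 1010... mask to further randomize.
mask = (np.arange(fs) % 2) ^ 1  # 1, 0, 1, 0, ...
whitened = paired ^ mask

# Step 4: one trial value per second = count of one-bits in the first 200
# of the 10,000 whitened bits; the remaining 9,800 bits go unused.
trial = int(whitened[:200].sum())
```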

Many randomness tests exist, and we perform several of them. Each device is subjected to a rigorous testing process based on more than one million random numbers. Some tests are performed automatically by the system on a regular basis, while researchers perform others manually. Some examples:

  • A suite of Z-score tests verifying that the numbers follow the expected mean and standard deviation of a normal distribution ("bell curve")
  • Wald-Wolfowitz runs tests ensuring that the lengths of runs (i.e. how many consecutive numbers are all above average or all below) are random
  • Detection and removal of outliers due to very occasional blips in the machinery, identified by being many standard deviations from the mean
  • Measurement of daily means and standard deviations of each RNG to ensure they are not too many standard deviations from their expected values
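As one example, a Wald-Wolfowitz runs test against the median can be sketched in Python on simulated trial values (ties at the median are counted as "below", a common simplification):

```python
import numpy as np

rng = np.random.default_rng(seed=3)

# Simulated per-second trial values (0-200 counts of one-bits).
x = rng.binomial(n=200, p=0.5, size=10_000)

# Dichotomize around the median; a run is a maximal stretch of consecutive
# values on the same side.
above = x > np.median(x)
runs = 1 + int(np.sum(above[1:] != above[:-1]))

# Expected run count and variance under randomness (Wald-Wolfowitz).
n1, n2 = int(above.sum()), int((~above).sum())
n = n1 + n2
expected = 1 + 2 * n1 * n2 / n
variance = 2 * n1 * n2 * (2 * n1 * n2 - n) / (n ** 2 * (n - 1))
z = (runs - expected) / variance ** 0.5
# |z| well below ~3 is consistent with random run lengths.
```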

MEET THE TEAM

Championing humanity through science since 1991

HMI Leadership

Sara Childre

President and CEO

Katherine Floriano

Vice President of Advancement of Major and Planned Gifts

Brian Kabaker

Chief Financial Officer and Director of Sales

Rollin McCraty, Ph.D.

Executive Vice President and Director of Research


GCP 2.0 Research Team

Mike Atkinson

Research center laboratory manager

Chris Cockrum

Mathematician and Embedded systems engineer

Scott Davies

Computer science and data analytics

Annette Deyhle, Ph.D.

Research coordinator

Alex Gomez-Marin, Ph.D.

Associate Professor

Ulf Holmberg, Ph.D.

Senior analyst and Functional lead

Roger Nelson, Ph.D.

Founder and Director

Nachum Plonka, Ph.D.

Principal data scientist

Dean Radin, Ph.D.

Chief scientist

Jeff Shafe

Webmaster

Avanti Shrikumar, Ph.D.

Parapsychology and Novel statistical analyses

Claudia Welss

Lead Citizen Scientist
