Why is matter-antimatter asymmetry surprising, if asymmetry can be generated by a random walk in which particles go into black holes?


My understanding is that the early universe was a very "hot" (i.e. energy-dense) environment. It was even hot enough for black holes to form from photons (a so-called "kugelblitz").



My second point of understanding is that black holes can lose mass due to Hawking radiation, which amounts to:




Physical insight into the process may be gained by imagining that
particle–antiparticle radiation is emitted from just beyond the event
horizon. This radiation does not come directly from the black hole
itself, but rather is a result of virtual particles being "boosted" by
the black hole's gravitation into becoming real particles. As the
particle–antiparticle pair was produced by the black hole's
gravitational energy, the escape of one of the particles lowers the
mass of the black hole.



An alternative view of the process is that vacuum fluctuations cause a
particle–antiparticle pair to appear close to the event horizon of a
black hole. One of the pair falls into the black hole while the other
escapes. In order to preserve total energy, the particle that fell
into the black hole must have had a negative energy (with respect to
an observer far away from the black hole). This causes the black hole
to lose mass, and, to an outside observer, it would appear that the
black hole has just emitted a particle. In another model, the process
is a quantum tunnelling effect, whereby particle–antiparticle pairs
will form from the vacuum, and one will tunnel outside the event
horizon.




So I simulated a scenario with two types of particles that are created in a 50/50 ratio by Hawking radiation and always annihilate each other whenever possible.




Edit:



In this simulation both particles are created, but one gets sucked
into the black hole. The other stays outside. So the charge should be
conserved.




The simulation (written in R) is here:



# Run the simulation for 1 million steps and initialize output matrix
n_steps = 1e6
res = matrix(ncol = 2, nrow = n_steps)

# Initialize number of particles to zero
n0 = n1 = 0
for(i in 1:n_steps){
  # Generate a new particle with 50/50 chance of matter/antimatter
  x = sample(0:1, 1)

  # If "x" is a matter particle then...
  if(x == 0){
    # If an antimatter particle exists, annihilate it with the new matter particle.
    # Otherwise increase the number of matter particles by one
    if(n1 > 0){
      n1 = n1 - 1
    } else {
      n0 = n0 + 1
    }
  }

  # If "x" is an antimatter particle then...
  if(x == 1){
    # If a matter particle exists, annihilate it with the new antimatter particle.
    # Otherwise increase the number of antimatter particles by one
    if(n0 > 0){
      n0 = n0 - 1
    } else {
      n1 = n1 + 1
    }
  }

  # Save the results and plot them if "i" is a multiple of 1000
  res[i, ] = c(n0, n1)
  if(i %% 1000 == 0){
    plot(res[1:i, 1], ylim = range(res[1:i, ]), type = "l", lwd = 3, panel.first = grid())
    lines(res[1:i, 2], col = "Red", lwd = 3)
  }
}




Here is a snapshot of the results, where the black line is the number of "type 0" particles and the red line is the number of "type 1" particles:
[plot]



Obviously this is a simplified 1d model where any generated anti-matter is immediately annihilated by a corresponding particle of matter, etc. However, I do not see why the qualitative result of a dominant particle "species" would not be expected to hold in general. So what is the basis for expecting equal amounts of matter and antimatter? How is it in conflict with this simple simulation?
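(To put the random-walk intuition in numbers: in this simple model the difference $d_N = n_0 - n_1$ changes by $\pm 1$ with equal probability at every step, i.e. it is a symmetric simple random walk, and since annihilation is instant we always have $n_0 = \max(d_N, 0)$ and $n_1 = \max(-d_N, 0)$. The standard result $E|d_N| \approx \sqrt{2N/\pi}$ then says the typical imbalance after $N$ Hawking emissions grows without bound, even though $E[d_N] = 0$.)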



EDIT:



As requested in the comments, I modified the simulation to allow a different initial number of each particle type and a different probability of generating each type.



# Run the simulation for 250k steps and initialize output matrix
n_steps = 250e3
res = matrix(ncol = 2, nrow = n_steps)

# Initial number of each type of particle and probability of generating type 0
n0 = 0
n1 = 0
p0 = 0.51
for(i in 1:n_steps){
  # Generate a new particle (probability p0 of matter, 1 - p0 of antimatter)
  x = sample(0:1, 1, prob = c(p0, 1 - p0))

  # If "x" is a matter particle then...
  if(x == 0){
    # If an antimatter particle exists, annihilate it with the new matter particle.
    # Otherwise increase the number of matter particles by one
    if(n1 > 0){
      n1 = n1 - 1
    } else {
      n0 = n0 + 1
    }
  }

  # If "x" is an antimatter particle then...
  if(x == 1){
    # If a matter particle exists, annihilate it with the new antimatter particle.
    # Otherwise increase the number of antimatter particles by one
    if(n0 > 0){
      n0 = n0 - 1
    } else {
      n1 = n1 + 1
    }
  }

  # Save the results and plot them if "i" is a multiple of 10000
  res[i, ] = c(n0, n1)
  if(i %% 1e4 == 0){
    plot(res[1:i, 1], ylim = range(res[1:i, ]), type = "l", lwd = 3, panel.first = grid())
    lines(res[1:i, 2], col = "Red", lwd = 3)
  }
}




Some examples:



n0 = 1000, n1 = 0, p = 0.5
[plot]

n0 = 0, n1 = 0, p = 0.51
[plot]

n0 = 1000, n1 = 1000, p = 0.5
[plot]



EDIT 2:



Thanks all for your answers and comments. I learned that the name for the process of generating matter from black holes is "black hole baryogenesis". However, the papers I checked on this topic (e.g. Nagatani 1998, Majumdar et al. 1994) do not seem to be talking about the same thing I am.



I am saying that, via the dynamics of symmetric generation and annihilation of matter and antimatter, along with symmetric baryogenesis via Hawking radiation, you will always get an imbalance over time that tends to grow due to positive feedback. That is, the Sakharov conditions such as CP violation are not actually required to get an asymmetry.



If you accept that pair production, annihilation, and Hawking radiation exist, then you should by default expect one species of particle to dominate over the other at all times. That is the only stable state (besides an energy-only universe). Approximately equal amounts of matter and antimatter is obviously very unstable, because the two annihilate each other, so it makes no sense to expect that.



It is possible that in some more complicated model (including more than one type of particle pair, distances between particles, forces, etc.) this tendency towards asymmetry would somehow be canceled out. But I cannot think of any reason why that would be; it should be up to the people who expect matter-antimatter symmetry to come up with a mechanism that explains it (which would be an odd thing to spend your time on, since that is decidedly not what we observe in our universe).



Regarding some specific issues people had:



1) Concerns about negative charge accumulating in the black holes and positive charge accumulating in the regular space



  • While in the simulation there is only one particle type, in practice this would be happening in parallel for electron-positron and proton-antiproton pairs at (as far as I know) equal rates. So I would not expect any kind of charge imbalance. You can imagine the particle pairs in the simulation are half electron-positron and half proton-antiproton pairs.

2) There were not enough black holes in the early universe to explain the asymmetry



  • I tried and failed to find an exact quote for this (so I could not check what assumptions were made), but I doubt the analyses included the positive feedback shown by the simulation. Also, I wondered whether they considered the possibility of kugelblitz black holes forming in an energy-only universe. Finally, the tendency towards a dominant species is ongoing all the time; it need not have happened in the early universe anyway.

3) If this process is ongoing in a universe that looks like ours today (where it may take a long time for a particle to travel from one black hole to another), we would expect some black holes to locally generate antimatter-dominated regions and others to generate matter-dominated regions. Eventually some of these regions should come into contact with each other, leading to an observable mass annihilation of particles.



  • I agree this would be the default expectation, but if you start from a highly matter-dominated state it would be very unlikely for enough antimatter to be generated to locally annihilate all the matter, and even then there is only a 50% chance the next phase is antimatter. Putting numbers on this would require a more complex model that I don't wish to attempt here.

4) Asymmetry is not actually considered surprising by physicists.



  • Well, Wikipedia says this:


    Neither the standard model of particle physics, nor the theory of
    general relativity provides a known explanation for why this should be
    so, and it is a natural assumption that the universe be neutral with
    all conserved charges. [...] As remarked in a 2012 research paper,
    "The origin of matter remains one of the great mysteries in physics."




5) This process is somehow an exotic "alternative" theory to the standard.



  • This process was deduced by accepting standard physics/cosmology as correct. It is a straightforward consequence of the interplay between pair production/annihilation and Hawking radiation. It may seem counterintuitive to people used to thinking about what we would expect on average from a model, when what we actually want to think about is how individual instances behave. If the simulation is run multiple times and all the "particles" are added up, the result will be ~50/50 matter/antimatter. However, we observe one particular universe, not an average over all possible universes. In each particular instance there is always a dominant species of particle, which we end up calling "matter".

So, after reading the answers/comments, I think the answer to my question is probably that physicists were thinking about what they would expect on average when they should have been thinking about what happens in specific instances (a minimal sketch of the distinction is below). But I'm not familiar enough with the literature to say.
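As a quick illustration of this average-versus-instance point, here is a minimal sketch (separate from the simulations above; n_steps and n_runs are arbitrary choices) that tracks only the difference n0 - n1 across many independent runs:

# Minimal sketch: in the simple model the difference d = n0 - n1 is a +/-1 random walk.
# Averaged across runs it stays near zero, but its typical size in any one run grows like sqrt(N).
set.seed(1)
n_steps = 1e5
n_runs = 200

# Final value of the difference walk for each independent run
d_final = replicate(n_runs, sum(sample(c(-1, 1), n_steps, replace = TRUE)))

mean(d_final)          # ~0: averaged over runs, matter and antimatter balance out
mean(abs(d_final))     # ~sqrt(2*n_steps/pi): any single run ends up dominated by one species
sqrt(2*n_steps/pi)     # theoretical typical imbalance, for comparison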



Edit 3:



After talking with Chris in the chat, I decided to make the rate of annihilation depend on the number of particles in the universe. I did this by setting the probability of annihilation to exp(-100/n_part), where n_part is the number of particles. This choice was pretty arbitrary; I chose it to have decent coverage over the typical range of particle numbers for 250k steps. It looks like this:
[plot of exp(-100/n_part) versus n_part]
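(For reference, the shape of this annihilation probability can be reproduced with a base-R one-liner; the plotting range 1 to 5000 is just an arbitrary choice covering the typical particle numbers in these runs:)

curve(exp(-100/x), from = 1, to = 5000, xlab = "n_part", ylab = "p_ann")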



Here is the code (I also added some parallelization, sorry for the increased complexity):



require(doParallel)

# Number of simulations to run and threads to use in parallel
n_sim = 100
n_cores = 30

# Initial number of each type of particle and probability
n0 = 0
n1 = 0
p0 = 0.5

registerDoParallel(cores = n_cores)
out = foreach(sim = 1:n_sim) %dopar% {
  # Run the simulation for 250k steps and initialize output matrix
  n_steps = 250e3
  res = matrix(ncol = 2, nrow = n_steps)

  for(i in 1:n_steps){
    # Generate a new particle with 50/50 chance of matter/antimatter
    x = sample(0:1, 1, prob = c(p0, 1 - p0))

    # Annihilation probability depends on the current number of particles
    n_part = sum(res[i - 1, ]) + 1
    p_ann = exp(-100/n_part)
    flag = sample(0:1, 1, prob = c(1 - p_ann, p_ann))

    # If "x" is a matter particle then...
    if(x == 0){
      # If an antimatter particle exists and annihilation occurs, remove it.
      # Otherwise increase the number of matter particles by one
      if(n1 > 0 & flag){
        n1 = n1 - 1
      } else {
        n0 = n0 + 1
      }
    }

    # If "x" is an antimatter particle then...
    if(x == 1){
      # If a matter particle exists and annihilation occurs, remove it.
      # Otherwise increase the number of antimatter particles by one
      if(n0 > 0 & flag){
        n0 = n0 - 1
      } else {
        n1 = n1 + 1
      }
    }

    # Save the results and report progress every 10000 steps
    res[i, ] = c(n0, n1)
    if(i %% 1e4 == 0 && sim %in% seq(1, n_sim, by = n_cores)){
      # plot(res[1:i, 1], ylim = range(res[1:i, ]), type = "l", lwd = 3, panel.first = grid())
      # lines(res[1:i, 2], col = "Red", lwd = 3)
      print(paste0(sim, ": ", i))
    }
  }

  return(res)
}



Here is an example of 25 results:
[plot]



And a histogram of the percent of particles that were in the minor class by the end of each simulation:
[plot]



So the results still agree with the simpler model in that such systems will tend to have a dominant species of particle.



Edit 4:



After further helpful conversation with Chris, he suggested that annihilation of more than one particle pair per step was the crucial added factor. Specifically, the number of removed pairs should be a sample from a Poisson distribution with mean proportional to the product of the matter and antimatter counts, i.e. rpois(1, m*n0*n1), where m is small enough that annihilations are very rare until a large number of matter and antimatter particles exist.



Here is the code (which is quite different from earlier):



require(doParallel)

# Number of simulations to run and threads to use in parallel
n_sim = 100
n_cores = 30

# Initial number of each type of particle, probability, and annihilation constant
n0 = 0
n1 = 0
p0 = 0.5
m = 10^-4

# Run the simulation for 250k steps
n_steps = 250e3

registerDoParallel(cores = n_cores)
out = foreach(sim = 1:n_sim) %dopar% {
  # Initialize output matrix
  res = matrix(ncol = 3, nrow = n_steps)

  for(i in 1:n_steps){
    # Generate a new particle with 50/50 chance of matter/antimatter
    x = sample(0:1, 1, prob = c(p0, 1 - p0))

    # If "x" is a matter particle then...
    if(x == 0){
      n0 = n0 + 1
    }

    # If "x" is an antimatter particle then...
    if(x == 1){
      n1 = n1 + 1
    }

    # Delete a number of particle pairs proportional to the product n0*n1
    n_del = rpois(1, m*n0*n1)
    n0 = max(0, n0 - n_del)
    n1 = max(0, n1 - n_del)

    # Save the results: counts and the fraction of particles in the minority species
    res[i, 1:2] = c(n0, n1)
    res[i, 3] = min(res[i, 1:2])/sum(res[i, 1:2])

    # Report progress every 10000 steps
    if(i %% 1e4 == 0 && sim %in% seq(1, n_sim, by = n_cores)){
      # plot(res[1:i, 1], ylim = range(res[1:i, ]), type = "l", lwd = 3, panel.first = grid())
      # lines(res[1:i, 2], col = "Red", lwd = 3)
      print(paste0(sim, ": ", i))
    }
  }

  return(res)
}



And here are the results for various values of m (which controls how often annihilation occurs). In each plot the blue line is the average proportion of the minority species at each step (over 100 simulations per value of m), the green line is the median, and the bands are +/- 1 sd from the mean:



[plot]



The first panel shows the same behavior as my earlier simulations, and you can see that as m gets smaller (i.e. annihilation at a given particle number becomes rarer) the system tends to stay in a more symmetric state (50/50 matter/antimatter), at least for more steps.



So a key assumption made by physicists seems to be that the annihilation rate in the early universe was very low, so that enough particles of both types could accumulate that neither species is likely to ever be totally "wiped out".
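A rough back-of-the-envelope check of that reading (my own estimate for this toy model): each step creates one particle while on average $2 m n_0 n_1$ particles are removed, so with $n_0 \approx n_1 \approx n$ the population settles around $n \approx 1/\sqrt{2m}$, about 70 particles for $m = 10^{-4}$ and about 2000 for $m = 10^{-7}$. The larger that quasi-equilibrium population, the longer it takes a fluctuation to wipe out one species entirely, which is consistent with the longer runs below.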



EDIT 5:



I ran one of those Poisson simulations for 8 million steps with m = 10^-6 and you can see that it just takes longer for the dominance to play out (it looks slightly different because the 1 sigma fill wouldn't plot with so many data points):
[plot]



So from that I conclude that very low annihilation rates just delay how long dominance takes to emerge, rather than producing a fundamentally different outcome.



Edit 6:



The same thing happens with m = 10^-7 and 28 million steps. The aggregate chart looks the same as the m = 10^-6 case above with 8 million steps, so here are some individual examples instead. You can see a clear trend towards a dominant species, just as in the original model:
[plot]



Edit 7:



To wrap this up: I think the answer to the question ("why do physicists think this?") is clear from my conversation with Chris here. Chris does not seem interested in turning that into an answer, but I will accept it if someone writes up something similar.
























  • Comments are not for extended discussion; this conversation has been moved to chat. – Chris, Oct 2 at 21:08






  • This example uses one type of particle/antiparticle. If it is charged (proton/antiproton, which is your argument), then charge conservation and baryon number conservation are the same, which makes it not physical. If you look at the literature on "black hole baryogenesis", extra assumptions are needed, model dependent (GUTs or something else), for the models to be predictive, not just random walks. – anna v, Oct 3 at 3:50







  • The above is true for a single black hole. There is no mechanism that ensures that, for a statistical distribution of black holes, it will be the same particle that dominates each black hole. Statistics for two particles would say that it would be 50/50, unless a model is used, and that is why the models are necessary. – anna v, Oct 3 at 3:59










  • @annav Basically I am trying to keep it simple, because I think you would need to come up with a mechanism that prevents dominance of matter over antimatter (or vice versa). It would not be difficult to add another set of particle pairs to this model, but I do not want to at this time. There are many other details of the calculation that are already inconsistent with our universe; the results are only qualitatively meaningful. – Livid, Oct 3 at 4:32











  • I am trying to say that a random walk around one black hole is not enough, as with many black holes both protons and antiprotons may be absorbed with a 50/50 chance, and that a physics model is needed that will introduce baryons and lead to baryon number violation. – anna v, Oct 3 at 4:49


















49















$begingroup$


My understanding is the early universe was a very "hot" (ie energy dense) environment. It was even hot enough for black holes to form from photons.



My second point of understanding is that black holes can lose mass due to hawking radiation, which amounts to:




Physical insight into the process may be gained by imagining that
particle–antiparticle radiation is emitted from just beyond the event
horizon. This radiation does not come directly from the black hole
itself, but rather is a result of virtual particles being "boosted" by
the black hole's gravitation into becoming real particles.[citation
needed] As the particle–antiparticle pair was produced by the black
hole's gravitational energy, the escape of one of the particles lowers
the mass of the black hole.3



An alternative view of the process is that vacuum fluctuations cause a
particle–antiparticle pair to appear close to the event horizon of a
black hole. One of the pair falls into the black hole while the other
escapes. In order to preserve total energy, the particle that fell
into the black hole must have had a negative energy (with respect to
an observer far away from the black hole). This causes the black hole
to lose mass, and, to an outside observer, it would appear that the
black hole has just emitted a particle. In another model, the process
is a quantum tunnelling effect, whereby particle–antiparticle pairs
will form from the vacuum, and one will tunnel outside the event
horizon.




So I simulated a scenario with two types of particles that are created in a 50/50 ratio from hawking radiation, and always annihilate each other if possible.




Edit:



In this simulation both particles are created, but one gets sucked
into the black hole. The other stays outside. So the charge should be
conserved.




The simulation (written in R) is here:



# Run the simulation for 1 million steps and initialize output matrix
n_steps = 1e6
res = matrix(ncol = 2, nrow = n_steps)

# Initiate number of particles to zero
n0 = n1 = 0
for(i in 1:n_steps)
# Generate a new particle with 50/50 chance of matter/antimatter
x = sample(0:1, 1)

# If "x" is a matter particle then...
if(x == 0)
# If an antimatter particle exists, then annihilate it with the new matter particle.
#Otherwise increase the number of matter particles by one
if(n1 > 0)
n1 = n1 - 1
else
n0 = n0 + 1



# If "x" is an antimatter particle then...
if(x == 1)
# If a matter particle exists, then annihilate it with the new antimatter particle.
# Otherwise increase the number of antimatter particles by one
if(n0 > 0)
n0 = n0 - 1
else
n1 = n1 + 1



# Save the results and plot them if "i" is a multiple of 1000
res[i, ] = c(n0, n1)
if(i %% 1000 == 0)
plot(res[1:i, 1], ylim = range(res[1:i, ]), type = "l", lwd = 3, panel.first = grid())
lines(res[1:i, 2], col = "Red", lwd = 3)




Here is a snapshot of the results, where the black line is the number of "type 0" particles and the red line is the number of "type 1" particles:
enter image description here



Obviously this is a simplified 1d model where any generated anti-matter is immediately annihilated by a corresponding particle of matter, etc. However, I do not see why the qualitative result of a dominant particle "species" would not be expected to hold in general. So what is the basis for expecting equal amounts of matter and antimatter? How is it in conflict with this simple simulation?



EDIT:



As requested in the comments I modified the simulation to allow different initial number of particles and the probability of generating each particle.



# Run the simulation for 1 million steps and initialize output matrix
n_steps = 250e3
res = matrix(ncol = 2, nrow = n_steps)

# Initial number of each type of particle and probability of generating type 0
n0 = 0
n1 = 0
p0 = 0.51
for(i in 1:n_steps)
# Generate a new particle with 50/50 chance of matter/antimatter
x = sample(0:1, 1, prob = c(p0, 1 - p0))

# If "x" is a matter particle then...
if(x == 0)
# If an antimatter particle exists, then annihilate it with the new matter particle.
# Otherwise increase the number of matter particles by one
if(n1 > 0)
n1 = n1 - 1
else
n0 = n0 + 1



# If "x" is an antimatter particle then...
if(x == 1)
# If a matter particle exists, then annihilate it with the new antimatter particle.
# Otherwise increase the number of antimatter particles by one
if(n0 > 0)
n0 = n0 - 1
else
n1 = n1 + 1



# Save the results and plot them if "i" is a multiple of 1000
res[i, ] = c(n0, n1)
if(i %% 1e4 == 0)
plot(res[1:i, 1], ylim = range(res[1:i, ]), type = "l", lwd = 3, panel.first = grid())
lines(res[1:i, 2], col = "Red", lwd = 3)




Some examples:



n0 = 1000, n1 = 0, p = 0.5
enter image description here



n0 = 0, n1 = 0, p = 0.51
enter image description here



n0 = 1000, n1 = 1000, p = 0.5
enter image description here



EDIT 2:



Thanks all for your answers and comments. I learned the name for the process of generating matter from black holes is "black hole baryogenesis". However, in the papers I checked on this topic (eg Nagatani 1998, Majumdar et al 1994) do not seem to be talking about the same thing I am.



I am saying that via the dynamics of symmetric generation and annihilation of matter-antimatter along with symmetric baryogenesis via hawking radiation you will always get an imbalance over time that will tend to grow due to a positive feedback. Ie, the Sakharov conditions such as CP-violation are not actually required to get an asymmetry.



If you accept pair-production, annihilation, and hawking radiation exists, then you should by default expect one dominant species of particle to dominate over the other at all times. That is the only stable state (besides an energy-only universe). Approximately equal matter/antimatter is quite obviously very unstable because they annihilate each other, so it makes no sense to expect that.



It is possible that in some more complicated model (including more than one type of particle-pair, distance between particles, forces, etc) somehow this tendency towards asymmetry would be somehow canceled out. But I cannot think of any reason why that would be, it should be up to the people who expect matter-antimatter symmetry to come up with a mechanism to explain that (which would be an odd thing to spend your time on since that is decidedly not what we observe in our universe).



Regarding some specific issues people had:



1) Concerns about negative charge accumulating in the black holes and positive charge accumulating in the regular space



  • While in the simulation there is only one particle, in practice this would be happening in parallel for electron-positrons and proton-antiproton pairs at (afaik) equal rates. So I would not expect any kind of charge imbalance. You can imagine particle pairs in the simulation are half electron-positrons and half proton-antiprotons.

2) There were not enough black holes in the early universe to explain the asymmetry



  • I tried and failed to get an exact quote for this so I could figure out what assumptions were made, but I doubt they included the positive feedback shown by the simulation in their analysis. Also, I wondered if they considered the possibility of kugelblitz black holes forming in an energy-only universe. Finally, the tendency towards a dominant species is ongoing all the time, it need not to have happened in the early universe anyway.

3) If this process is ongoing in a universe that looks like ours today (where it may take a long time for a particle to travel from one black hole to the other), we would expect some black holes to locally happen to generate antimatter dominated regions and others to generate matter dominated regions. Eventually some of these regions should come into contact with each other leading to an observable mass annihilation of particles.



  • I agree this would be the default expectation, but If you start from a highly matter-dominated state it would be very unlikely for enough antimatter to be generated to locally annihilate all the matter and even then there is only a 50% chance the next phase is antimatter. Putting numbers on stuff like this would require a more complex model that I don't wish to attempt here.

4) Asymmetry is not actually considered surprising by physicists.



  • Well, it says this on wikipedia:


    Neither the standard model of particle physics, nor the theory of
    general relativity provides a known explanation for why this should be
    so, and it is a natural assumption that the universe be neutral with
    all conserved charges. [...] As remarked in a 2012 research paper,
    "The origin of matter remains one of the great mysteries in physics."




5) This process is somehow an exotic "alternative" theory to the standard.



  • This process was deduced by accepting standard physics/cosmology to be correct. It is a straightforward consequence of the interplay between pair production/annihilation and hawking radiation. It may seem counterintuitive to people used to thinking about what we would expect on average from a model, when actually we want to think about how the individual instances behave. If the simulation is run multiple times and add up all the "particles" the result will be ~50/50 matter/antimatter. However, we observe one particular universe not an average of all possible universes. In each particular instance there is always a dominating species of particle, which we end up calling "matter".

So, after reading the answers/comments I think the answer to my question is probably that physicists were thinking of what they would expect on average when they should have been thinking about what would happen in specific instances. But I'm not familiar enough with the literature to say.



Edit 3:



After talking with Chris in the chat I decided to make the rate of annihilation dependent on the number of particles in the universe. I did this by setting the probability of annihilation to exp(-100/n_part), where n_part is the number of particles. This was pretty arbitrary, I chose it to have decent coverage over the whole typical range for 250k steps. It looks like this:
enter image description here



Here is the code (I also added some parallelization, sorry for the increased complexity):



require(doParallel)

# Number of simulations to run and threads to use in parallel
n_sim = 100
n_cores = 30

# Initial number of each type of particle and probability
n0 = 0
n1 = 0
p0 = 0.5

registerDoParallel(cores = n_cores)
out = foreach(sim = 1:n_sim) %dopar%
# Run the simulation for 250k steps and initialize output matrix
n_steps = 250e3
res = matrix(ncol = 2, nrow = n_steps)

for(i in 1:n_steps)
# Generate a new particle with 50/50 chance of matter/antimatter
x = sample(0:1, 1, prob = c(p0, 1 - p0))

n_part = sum(res[i -1, ]) + 1
p_ann = exp(-100/n_part)
flag = sample(0:1, 1, prob = c(1 - p_ann, p_ann))


# If "x" is a matter particle then...
if(x == 0)
# If an antimatter particle exists, then annihilate it with the new matter particle.
# Otherwise increase the number of matter particles by one
if(n1 > 0 & flag)
n1 = n1 - 1
else
n0 = n0 + 1



# If "x" is an antimatter particle then...
if(x == 1)
# If a matter particle exists, then annihilate it with the new antimatter particle.
# Otherwise increase the number of antimatter particles by one
if(n0 > 0 & flag)
n0 = n0 - 1
else
n1 = n1 + 1



# Save the results and plot them if "i" is a multiple of 1000
res[i, ] = c(n0, n1)
if(i %% 1e4 == 0 && sim %in% seq(1, n_sim, by = n_cores))
# plot(res[1:i, 1], ylim = range(res[1:i, ]), type = "l", lwd = 3, panel.first = grid())
# lines(res[1:i, 2], col = "Red", lwd = 3)
print(paste0(sim, ": ", i))


return(res)



Here is an example of 25 results:
enter image description here



And a histogram of the percent of particles that were in the minor class by the end of each simulation:
enter image description here



So the results still agree with the simpler model in that such systems will tend to have a dominant species of particle.



Edit 4:



After further helpful conversation with chris he suggested that annihilation of more than one particle pair per step was the crucial added factor. Specifically that the number of removed particles should be a sample from the Poisson distribution with a mean proportional to the total number of particles, ie rpois(1, m*n0*n1) where m is small enough so that annihilation are very rare until a large number of matter and antimatter particles exist.



Here is the code (which is quite different from earlier):



require(doParallel)

# Number of simulations to run and threads to use in parallel
n_sim = 100
n_cores = 30

# Initial number of each type of particle and probability
n0 = 0
n1 = 0
p0 = 0.5
m = 10^-4

# Run the simulation for 250k steps and
n_steps = 250e3

registerDoParallel(cores = n_cores)
out = foreach(sim = 1:n_sim) %dopar%
# Initialize output matrix
res = matrix(ncol = 3, nrow = n_steps)

for(i in 1:n_steps)
# Generate a new particle with 50/50 chance of matter/antimatter
x = sample(0:1, 1, prob = c(p0, 1 - p0))

# If "x" is a matter particle then...
if(x == 0)
n0 = n0 + 1


# If "x" is an antimatter particle then...
if(x == 1)
n1 = n1 + 1


# Delete number of particles proportional to the product of n0*n1
n_del = rpois(1, m*n0*n1)
n0 = max(0, n0 - n_del)
n1 = max(0, n1 - n_del)

# Save the results and plot them if "i" is a multiple of 1000
res[i, 1:2] = c(n0, n1)
res[i, 3] = min(res[i, 1:2])/sum(res[i, 1:2])
if(i %% 1e4 == 0 && sim %in% seq(1, n_sim, by = n_cores))
# plot(res[1:i, 1], ylim = range(res[1:i, ]), type = "l", lwd = 3, panel.first = grid())
# lines(res[1:i, 2], col = "Red", lwd = 3)
print(paste0(sim, ": ", i))


return(res)



And here are the results for various values of "m" (which controls how often annihilation occurs). This plot shows the average proportion of minor particles for each step (using 100 simulations per value of m) as the blue line, the green line is the median, and the bands are +/- 1 sd from the mean:



enter image description here



The first plot has the same behavior as my simulations, and you can see that as m gets smaller (annihilation rate as a function of number of particles becomes rarer) the system tends to stay in a more symmetric state (50/50 matter/antimatter), at least for more steps.



So a key assumption made by physicists seems to be that the annihilation rate in the early universe was very low, so that enough particles could accumulate until they became common enough that neither is likely to ever get totally "wiped out".



EDIT 5:



I ran one of those Poisson simulations for 8 million steps with m = 10^-6 and you can see that it just takes longer for the dominance to play out (it looks slightly different because the 1 sigma fill wouldn't plot with so many data points):
enter image description here



So from that I conclude the very low annihilation rates just delay how long it takes, rather than resulting in a fundamentally different outcome.



Edit 6:



Same thing happens with m = 10^-7 and 28 million steps. The aggregate chart looks the same as the above m = 10^-6 with 8 million steps. So here are some individual examples. You can see a clear trend towards a dominating species just as in the original model:
enter image description here



Edit 7:



To wrap this up... I think the answer to the question ("why do physicists think this?") is clear from my conversation with Chris here. Chris does not seem interested in making that into an answer but I will accept it if someone writes similar.










share|cite|improve this question











$endgroup$














  • $begingroup$
    Comments are not for extended discussion; this conversation has been moved to chat.
    $endgroup$
    – Chris
    Oct 2 at 21:08






  • 1




    $begingroup$
    This example uses one type of particle/antiparticle. If it is charged, proton/antiproton, (which is your argument) then charge conservation and baryon number conservation are the same, which makes it is not physical If you look at the literature of "black hole baryogenesis" , extra assumptions are needed, model ( GUTS or somehting else) dependent for the models to be predictive, not just random walks.
    $endgroup$
    – anna v
    Oct 3 at 3:50







  • 1




    $begingroup$
    The above is true for a single black hole . There is no mechanism that ensures that for a statistical distribution of black holes it will be the same particle that will dominate the black hole. Statistics for two particles would say that it would be 50/50 , unless a model is used, and that is why the models are necessary.
    $endgroup$
    – anna v
    Oct 3 at 3:59










  • $begingroup$
    @annav Basically I am trying to keep it simple because I think you should need to come up with a mechanism that prevents a dominance between matter and anti-matter. It would not be difficult to add another set of particle pairs to this model, but I do not want to at this time. There are many other details of the calculation that are inconsistent with our universe already. The resuls are only qualitatively meaningful.
    $endgroup$
    – Livid
    Oct 3 at 4:32











  • $begingroup$
    I am trying to say that random walk around one black hole is not enough, as with many black holes both protons and antiprotons may be absorbed with a 50/50 chance, and that a physics model is needed that will introduce baryons and lead to baryon number violation
    $endgroup$
    – anna v
    Oct 3 at 4:49














49













49









49


15



$begingroup$


My understanding is the early universe was a very "hot" (ie energy dense) environment. It was even hot enough for black holes to form from photons.



My second point of understanding is that black holes can lose mass due to hawking radiation, which amounts to:




Physical insight into the process may be gained by imagining that
particle–antiparticle radiation is emitted from just beyond the event
horizon. This radiation does not come directly from the black hole
itself, but rather is a result of virtual particles being "boosted" by
the black hole's gravitation into becoming real particles.[citation
needed] As the particle–antiparticle pair was produced by the black
hole's gravitational energy, the escape of one of the particles lowers
the mass of the black hole.3



An alternative view of the process is that vacuum fluctuations cause a
particle–antiparticle pair to appear close to the event horizon of a
black hole. One of the pair falls into the black hole while the other
escapes. In order to preserve total energy, the particle that fell
into the black hole must have had a negative energy (with respect to
an observer far away from the black hole). This causes the black hole
to lose mass, and, to an outside observer, it would appear that the
black hole has just emitted a particle. In another model, the process
is a quantum tunnelling effect, whereby particle–antiparticle pairs
will form from the vacuum, and one will tunnel outside the event
horizon.




So I simulated a scenario with two types of particles that are created in a 50/50 ratio from hawking radiation, and always annihilate each other if possible.




Edit:



In this simulation both particles are created, but one gets sucked
into the black hole. The other stays outside. So the charge should be
conserved.




The simulation (written in R) is here:



# Run the simulation for 1 million steps and initialize output matrix
n_steps = 1e6
res = matrix(ncol = 2, nrow = n_steps)

# Initiate number of particles to zero
n0 = n1 = 0
for(i in 1:n_steps)
# Generate a new particle with 50/50 chance of matter/antimatter
x = sample(0:1, 1)

# If "x" is a matter particle then...
if(x == 0)
# If an antimatter particle exists, then annihilate it with the new matter particle.
#Otherwise increase the number of matter particles by one
if(n1 > 0)
n1 = n1 - 1
else
n0 = n0 + 1



# If "x" is an antimatter particle then...
if(x == 1)
# If a matter particle exists, then annihilate it with the new antimatter particle.
# Otherwise increase the number of antimatter particles by one
if(n0 > 0)
n0 = n0 - 1
else
n1 = n1 + 1



# Save the results and plot them if "i" is a multiple of 1000
res[i, ] = c(n0, n1)
if(i %% 1000 == 0)
plot(res[1:i, 1], ylim = range(res[1:i, ]), type = "l", lwd = 3, panel.first = grid())
lines(res[1:i, 2], col = "Red", lwd = 3)




Here is a snapshot of the results, where the black line is the number of "type 0" particles and the red line is the number of "type 1" particles:
enter image description here



Obviously this is a simplified 1d model where any generated anti-matter is immediately annihilated by a corresponding particle of matter, etc. However, I do not see why the qualitative result of a dominant particle "species" would not be expected to hold in general. So what is the basis for expecting equal amounts of matter and antimatter? How is it in conflict with this simple simulation?



EDIT:



As requested in the comments I modified the simulation to allow different initial number of particles and the probability of generating each particle.



# Run the simulation for 1 million steps and initialize output matrix
n_steps = 250e3
res = matrix(ncol = 2, nrow = n_steps)

# Initial number of each type of particle and probability of generating type 0
n0 = 0
n1 = 0
p0 = 0.51
for(i in 1:n_steps)
# Generate a new particle with 50/50 chance of matter/antimatter
x = sample(0:1, 1, prob = c(p0, 1 - p0))

# If "x" is a matter particle then...
if(x == 0)
# If an antimatter particle exists, then annihilate it with the new matter particle.
# Otherwise increase the number of matter particles by one
if(n1 > 0)
n1 = n1 - 1
else
n0 = n0 + 1



# If "x" is an antimatter particle then...
if(x == 1)
# If a matter particle exists, then annihilate it with the new antimatter particle.
# Otherwise increase the number of antimatter particles by one
if(n0 > 0)
n0 = n0 - 1
else
n1 = n1 + 1



# Save the results and plot them if "i" is a multiple of 1000
res[i, ] = c(n0, n1)
if(i %% 1e4 == 0)
plot(res[1:i, 1], ylim = range(res[1:i, ]), type = "l", lwd = 3, panel.first = grid())
lines(res[1:i, 2], col = "Red", lwd = 3)




Some examples:



n0 = 1000, n1 = 0, p = 0.5
enter image description here



n0 = 0, n1 = 0, p = 0.51
enter image description here



n0 = 1000, n1 = 1000, p = 0.5
enter image description here



EDIT 2:



Thanks all for your answers and comments. I learned the name for the process of generating matter from black holes is "black hole baryogenesis". However, in the papers I checked on this topic (eg Nagatani 1998, Majumdar et al 1994) do not seem to be talking about the same thing I am.



I am saying that via the dynamics of symmetric generation and annihilation of matter-antimatter along with symmetric baryogenesis via hawking radiation you will always get an imbalance over time that will tend to grow due to a positive feedback. Ie, the Sakharov conditions such as CP-violation are not actually required to get an asymmetry.



If you accept pair-production, annihilation, and hawking radiation exists, then you should by default expect one dominant species of particle to dominate over the other at all times. That is the only stable state (besides an energy-only universe). Approximately equal matter/antimatter is quite obviously very unstable because they annihilate each other, so it makes no sense to expect that.



It is possible that in some more complicated model (including more than one type of particle-pair, distance between particles, forces, etc) somehow this tendency towards asymmetry would be somehow canceled out. But I cannot think of any reason why that would be, it should be up to the people who expect matter-antimatter symmetry to come up with a mechanism to explain that (which would be an odd thing to spend your time on since that is decidedly not what we observe in our universe).



Regarding some specific issues people had:



1) Concerns about negative charge accumulating in the black holes and positive charge accumulating in the regular space



  • While in the simulation there is only one particle, in practice this would be happening in parallel for electron-positrons and proton-antiproton pairs at (afaik) equal rates. So I would not expect any kind of charge imbalance. You can imagine particle pairs in the simulation are half electron-positrons and half proton-antiprotons.

2) There were not enough black holes in the early universe to explain the asymmetry



  • I tried and failed to get an exact quote for this so I could figure out what assumptions were made, but I doubt they included the positive feedback shown by the simulation in their analysis. Also, I wondered if they considered the possibility of kugelblitz black holes forming in an energy-only universe. Finally, the tendency towards a dominant species is ongoing all the time, it need not to have happened in the early universe anyway.

3) If this process is ongoing in a universe that looks like ours today (where it may take a long time for a particle to travel from one black hole to the other), we would expect some black holes to locally happen to generate antimatter dominated regions and others to generate matter dominated regions. Eventually some of these regions should come into contact with each other leading to an observable mass annihilation of particles.



  • I agree this would be the default expectation, but If you start from a highly matter-dominated state it would be very unlikely for enough antimatter to be generated to locally annihilate all the matter and even then there is only a 50% chance the next phase is antimatter. Putting numbers on stuff like this would require a more complex model that I don't wish to attempt here.

4) Asymmetry is not actually considered surprising by physicists.



  • Well, it says this on wikipedia:


    Neither the standard model of particle physics, nor the theory of
    general relativity provides a known explanation for why this should be
    so, and it is a natural assumption that the universe be neutral with
    all conserved charges. [...] As remarked in a 2012 research paper,
    "The origin of matter remains one of the great mysteries in physics."




5) This process is somehow an exotic "alternative" theory to the standard.



  • This process was deduced by accepting standard physics/cosmology to be correct. It is a straightforward consequence of the interplay between pair production/annihilation and hawking radiation. It may seem counterintuitive to people used to thinking about what we would expect on average from a model, when actually we want to think about how the individual instances behave. If the simulation is run multiple times and add up all the "particles" the result will be ~50/50 matter/antimatter. However, we observe one particular universe not an average of all possible universes. In each particular instance there is always a dominating species of particle, which we end up calling "matter".

So, after reading the answers/comments I think the answer to my question is probably that physicists were thinking of what they would expect on average when they should have been thinking about what would happen in specific instances. But I'm not familiar enough with the literature to say.



Edit 3:



After talking with Chris in the chat I decided to make the rate of annihilation dependent on the number of particles in the universe. I did this by setting the probability of annihilation to exp(-100/n_part), where n_part is the number of particles. This was pretty arbitrary, I chose it to have decent coverage over the whole typical range for 250k steps. It looks like this:
enter image description here



Here is the code (I also added some parallelization, sorry for the increased complexity):



require(doParallel)

# Number of simulations to run and threads to use in parallel
n_sim = 100
n_cores = 30

# Initial number of each type of particle and probability
n0 = 0
n1 = 0
p0 = 0.5

registerDoParallel(cores = n_cores)
out = foreach(sim = 1:n_sim) %dopar%
# Run the simulation for 250k steps and initialize output matrix
n_steps = 250e3
res = matrix(ncol = 2, nrow = n_steps)

for(i in 1:n_steps)
# Generate a new particle with 50/50 chance of matter/antimatter
x = sample(0:1, 1, prob = c(p0, 1 - p0))

n_part = sum(res[i -1, ]) + 1
p_ann = exp(-100/n_part)
flag = sample(0:1, 1, prob = c(1 - p_ann, p_ann))


# If "x" is a matter particle then...
if(x == 0)
# If an antimatter particle exists, then annihilate it with the new matter particle.
# Otherwise increase the number of matter particles by one
if(n1 > 0 & flag)
n1 = n1 - 1
else
n0 = n0 + 1



# If "x" is an antimatter particle then...
if(x == 1)
# If a matter particle exists, then annihilate it with the new antimatter particle.
# Otherwise increase the number of antimatter particles by one
if(n0 > 0 & flag)
n0 = n0 - 1
else
n1 = n1 + 1



# Save the results and plot them if "i" is a multiple of 1000
res[i, ] = c(n0, n1)
if(i %% 1e4 == 0 && sim %in% seq(1, n_sim, by = n_cores))
# plot(res[1:i, 1], ylim = range(res[1:i, ]), type = "l", lwd = 3, panel.first = grid())
# lines(res[1:i, 2], col = "Red", lwd = 3)
print(paste0(sim, ": ", i))


return(res)



Here is an example of 25 results:
enter image description here



And a histogram of the percent of particles that were in the minor class by the end of each simulation:
enter image description here



So the results still agree with the simpler model in that such systems will tend to have a dominant species of particle.



Edit 4:



After further helpful conversation with chris he suggested that annihilation of more than one particle pair per step was the crucial added factor. Specifically that the number of removed particles should be a sample from the Poisson distribution with a mean proportional to the total number of particles, ie rpois(1, m*n0*n1) where m is small enough so that annihilation are very rare until a large number of matter and antimatter particles exist.



Here is the code (which is quite different from earlier):



require(doParallel)

# Number of simulations to run and threads to use in parallel
n_sim = 100
n_cores = 30

# Initial number of each type of particle and probability
n0 = 0
n1 = 0
p0 = 0.5
m = 10^-4

# Run the simulation for 250k steps and
n_steps = 250e3

registerDoParallel(cores = n_cores)
out = foreach(sim = 1:n_sim) %dopar%
# Initialize output matrix
res = matrix(ncol = 3, nrow = n_steps)

for(i in 1:n_steps)
# Generate a new particle with 50/50 chance of matter/antimatter
x = sample(0:1, 1, prob = c(p0, 1 - p0))

# If "x" is a matter particle then...
if(x == 0)
n0 = n0 + 1


# If "x" is an antimatter particle then...
if(x == 1)
n1 = n1 + 1


# Delete number of particles proportional to the product of n0*n1
n_del = rpois(1, m*n0*n1)
n0 = max(0, n0 - n_del)
n1 = max(0, n1 - n_del)

# Save the results and plot them if "i" is a multiple of 1000
res[i, 1:2] = c(n0, n1)
res[i, 3] = min(res[i, 1:2])/sum(res[i, 1:2])
if(i %% 1e4 == 0 && sim %in% seq(1, n_sim, by = n_cores))
# plot(res[1:i, 1], ylim = range(res[1:i, ]), type = "l", lwd = 3, panel.first = grid())
# lines(res[1:i, 2], col = "Red", lwd = 3)
print(paste0(sim, ": ", i))


return(res)



And here are the results for various values of "m" (which controls how often annihilation occurs). This plot shows the average proportion of minor particles for each step (using 100 simulations per value of m) as the blue line, the green line is the median, and the bands are +/- 1 sd from the mean:



enter image description here



The first plot has the same behavior as my simulations, and you can see that as m gets smaller (annihilation rate as a function of number of particles becomes rarer) the system tends to stay in a more symmetric state (50/50 matter/antimatter), at least for more steps.



So a key assumption made by physicists seems to be that the annihilation rate in the early universe was very low, so that enough particles could accumulate until they became common enough that neither is likely to ever get totally "wiped out".



EDIT 5:



I ran one of those Poisson simulations for 8 million steps with m = 10^-6 and you can see that it just takes longer for the dominance to play out (it looks slightly different because the 1 sigma fill wouldn't plot with so many data points):
enter image description here



So from that I conclude the very low annihilation rates just delay how long it takes, rather than resulting in a fundamentally different outcome.



Edit 6:



Same thing happens with m = 10^-7 and 28 million steps. The aggregate chart looks the same as the above m = 10^-6 with 8 million steps. So here are some individual examples. You can see a clear trend towards a dominating species just as in the original model:
enter image description here



Edit 7:



To wrap this up... I think the answer to the question ("why do physicists think this?") is clear from my conversation with Chris here. Chris does not seem interested in making that into an answer but I will accept it if someone writes similar.










share|cite|improve this question











$endgroup$




My understanding is the early universe was a very "hot" (ie energy dense) environment. It was even hot enough for black holes to form from photons.



My second point of understanding is that black holes can lose mass due to hawking radiation, which amounts to:




Physical insight into the process may be gained by imagining that
particle–antiparticle radiation is emitted from just beyond the event
horizon. This radiation does not come directly from the black hole
itself, but rather is a result of virtual particles being "boosted" by
the black hole's gravitation into becoming real particles.[citation
needed] As the particle–antiparticle pair was produced by the black
hole's gravitational energy, the escape of one of the particles lowers
the mass of the black hole.3



An alternative view of the process is that vacuum fluctuations cause a
particle–antiparticle pair to appear close to the event horizon of a
black hole. One of the pair falls into the black hole while the other
escapes. In order to preserve total energy, the particle that fell
into the black hole must have had a negative energy (with respect to
an observer far away from the black hole). This causes the black hole
to lose mass, and, to an outside observer, it would appear that the
black hole has just emitted a particle. In another model, the process
is a quantum tunnelling effect, whereby particle–antiparticle pairs
will form from the vacuum, and one will tunnel outside the event
horizon.




So I simulated a scenario with two types of particles that are created in a 50/50 ratio from hawking radiation, and always annihilate each other if possible.




Edit:



In this simulation both particles are created, but one gets sucked
into the black hole. The other stays outside. So the charge should be
conserved.




The simulation (written in R) is here:



# Run the simulation for 1 million steps and initialize output matrix
n_steps = 1e6
res = matrix(ncol = 2, nrow = n_steps)

# Initiate number of particles to zero
n0 = n1 = 0
for(i in 1:n_steps)
# Generate a new particle with 50/50 chance of matter/antimatter
x = sample(0:1, 1)

# If "x" is a matter particle then...
if(x == 0)
# If an antimatter particle exists, then annihilate it with the new matter particle.
#Otherwise increase the number of matter particles by one
if(n1 > 0)
n1 = n1 - 1
else
n0 = n0 + 1



# If "x" is an antimatter particle then...
if(x == 1)
# If a matter particle exists, then annihilate it with the new antimatter particle.
# Otherwise increase the number of antimatter particles by one
if(n0 > 0)
n0 = n0 - 1
else
n1 = n1 + 1



# Save the results and plot them if "i" is a multiple of 1000
res[i, ] = c(n0, n1)
if(i %% 1000 == 0)
plot(res[1:i, 1], ylim = range(res[1:i, ]), type = "l", lwd = 3, panel.first = grid())
lines(res[1:i, 2], col = "Red", lwd = 3)




Here is a snapshot of the results, where the black line is the number of "type 0" particles and the red line is the number of "type 1" particles:
enter image description here



Obviously this is a simplified 1d model where any generated anti-matter is immediately annihilated by a corresponding particle of matter, etc. However, I do not see why the qualitative result of a dominant particle "species" would not be expected to hold in general. So what is the basis for expecting equal amounts of matter and antimatter? How is it in conflict with this simple simulation?



EDIT:



As requested in the comments, I modified the simulation to allow different initial numbers of particles and a different probability of generating each type.



# Run the simulation for 250,000 steps and initialize output matrix
n_steps = 250e3
res = matrix(ncol = 2, nrow = n_steps)

# Initial number of each type of particle and probability of generating type 0
n0 = 0
n1 = 0
p0 = 0.51
for(i in 1:n_steps) {
  # Generate a new particle with probability p0 of matter, 1 - p0 of antimatter
  x = sample(0:1, 1, prob = c(p0, 1 - p0))

  # If "x" is a matter particle then...
  if(x == 0) {
    # If an antimatter particle exists, then annihilate it with the new matter particle.
    # Otherwise increase the number of matter particles by one
    if(n1 > 0) {
      n1 = n1 - 1
    } else {
      n0 = n0 + 1
    }
  }

  # If "x" is an antimatter particle then...
  if(x == 1) {
    # If a matter particle exists, then annihilate it with the new antimatter particle.
    # Otherwise increase the number of antimatter particles by one
    if(n0 > 0) {
      n0 = n0 - 1
    } else {
      n1 = n1 + 1
    }
  }

  # Save the results and plot them if "i" is a multiple of 10,000
  res[i, ] = c(n0, n1)
  if(i %% 1e4 == 0) {
    plot(res[1:i, 1], ylim = range(res[1:i, ]), type = "l", lwd = 3, panel.first = grid())
    lines(res[1:i, 2], col = "Red", lwd = 3)
  }
}




Some examples:



n0 = 1000, n1 = 0, p = 0.5
enter image description here



n0 = 0, n1 = 0, p = 0.51
enter image description here



n0 = 1000, n1 = 1000, p = 0.5
enter image description here
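For the p = 0.51 example above, a rough expectation for the size of the effect: the bias adds a net 2*p0 - 1 = 0.02 matter particles per step on average, which dwarfs the random-walk fluctuations over 250k steps:

# Back-of-the-envelope scale for the p0 = 0.51 run
n_steps = 250e3
(2 * 0.51 - 1) * n_steps   # ~ 5000 excess matter particles expected from the bias alone
sqrt(2 * n_steps / pi)     # ~ 400 particles expected from random-walk fluctuations alone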



EDIT 2:



Thanks all for your answers and comments. I learned that the process of generating matter from black holes is called "black hole baryogenesis". However, the papers I checked on this topic (e.g. Nagatani 1998, Majumdar et al. 1994) do not seem to be talking about the same thing I am.



I am saying that, via the dynamics of symmetric generation and annihilation of matter and antimatter, along with symmetric baryogenesis via Hawking radiation, you will always get an imbalance over time that tends to grow due to a positive feedback. That is, the Sakharov conditions such as CP violation are not actually required to get an asymmetry.



If you accept that pair production, annihilation, and Hawking radiation exist, then you should by default expect one species of particle to dominate over the other at all times. That is the only stable state (besides an energy-only universe). Approximately equal amounts of matter and antimatter is obviously a very unstable state, because they annihilate each other, so it makes no sense to expect that.



It is possible that in some more complicated model (including more than one type of particle pair, distances between particles, forces, etc.) this tendency towards asymmetry would somehow be cancelled out. But I cannot think of any reason why that would happen; it should be up to the people who expect matter-antimatter symmetry to come up with a mechanism that explains it (which would be an odd thing to spend your time on, since that is decidedly not what we observe in our universe).



Regarding some specific issues people had:



1) Concerns about negative charge accumulating in the black holes and positive charge accumulating in the regular space



  • While in the simulation there is only one type of particle pair, in practice this would be happening in parallel for electron-positron and proton-antiproton pairs at (as far as I know) equal rates. So I would not expect any kind of charge imbalance. You can imagine the particle pairs in the simulation are half electron-positron and half proton-antiproton pairs.

2) There were not enough black holes in the early universe to explain the asymmetry



  • I tried and failed to get an exact quote for this, so I could not figure out what assumptions were made, but I doubt their analysis included the positive feedback shown by the simulation. Also, I wondered if they considered the possibility of kugelblitz black holes forming in an energy-only universe. Finally, the tendency towards a dominant species is ongoing all the time; it need not have happened in the early universe anyway.

3) If this process is ongoing in a universe that looks like ours today (where it may take a long time for a particle to travel from one black hole to another), we would expect some black holes to locally generate antimatter-dominated regions and others to generate matter-dominated regions. Eventually some of these regions should come into contact with each other, leading to an observable mass annihilation of particles.



  • I agree this would be the default expectation, but if you start from a highly matter-dominated state it would be very unlikely for enough antimatter to be generated to locally annihilate all the matter, and even then there is only a 50% chance the next phase is antimatter. Putting numbers on this would require a more complex model that I don't wish to attempt here.

4) Asymmetry is not actually considered surprising by physicists.



  • Well, it says this on Wikipedia:


    Neither the standard model of particle physics, nor the theory of
    general relativity provides a known explanation for why this should be
    so, and it is a natural assumption that the universe be neutral with
    all conserved charges. [...] As remarked in a 2012 research paper,
    "The origin of matter remains one of the great mysteries in physics."




5) This process is somehow an exotic "alternative" theory to the standard one.



  • This process was deduced by accepting standard physics/cosmology to be correct. It is a straightforward consequence of the interplay between pair production/annihilation and Hawking radiation. It may seem counterintuitive to people used to thinking about what we would expect on average from a model, when actually we want to think about how individual instances behave. If the simulation is run multiple times and all the "particles" are added up, the result will be ~50/50 matter/antimatter. However, we observe one particular universe, not an average of all possible universes. In each particular instance there is always a dominating species of particle, which we end up calling "matter".

So, after reading the answers/comments I think the answer to my question is probably that physicists were thinking of what they would expect on average when they should have been thinking about what would happen in specific instances. But I'm not familiar enough with the literature to say.



Edit 3:



After talking with Chris in the chat, I decided to make the rate of annihilation dependent on the number of particles in the universe. I did this by setting the probability of annihilation to exp(-100/n_part), where n_part is the number of particles. This was pretty arbitrary; I chose it to have decent coverage over the typical range seen in 250k-step runs. It looks like this:
enter image description here
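For reference, that curve is just the function exp(-100/n_part) evaluated over the typical particle counts, and can be reproduced with:

# Shape of the annihilation probability used below: p_ann = exp(-100/n_part)
n_part = 1:5000
plot(n_part, exp(-100/n_part), type = "l", lwd = 3, panel.first = grid(),
     xlab = "number of particles", ylab = "probability that annihilation is allowed")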



Here is the code (I also added some parallelization, sorry for the increased complexity):



require(doParallel)

# Number of simulations to run and threads to use in parallel
n_sim = 100
n_cores = 30

# Initial number of each type of particle and probability
n0 = 0
n1 = 0
p0 = 0.5

registerDoParallel(cores = n_cores)
out = foreach(sim = 1:n_sim) %dopar% {
  # Run the simulation for 250k steps and initialize output matrix
  n_steps = 250e3
  res = matrix(ncol = 2, nrow = n_steps)

  for(i in 1:n_steps) {
    # Generate a new particle with 50/50 chance of matter/antimatter
    x = sample(0:1, 1, prob = c(p0, 1 - p0))

    # Annihilation is only allowed with probability p_ann = exp(-100/n_part)
    n_part = sum(res[i - 1, ]) + 1
    p_ann = exp(-100/n_part)
    flag = sample(0:1, 1, prob = c(1 - p_ann, p_ann))

    # If "x" is a matter particle then...
    if(x == 0) {
      # If an antimatter particle exists (and annihilation is allowed), annihilate it.
      # Otherwise increase the number of matter particles by one
      if(n1 > 0 & flag) {
        n1 = n1 - 1
      } else {
        n0 = n0 + 1
      }
    }

    # If "x" is an antimatter particle then...
    if(x == 1) {
      # If a matter particle exists (and annihilation is allowed), annihilate it.
      # Otherwise increase the number of antimatter particles by one
      if(n0 > 0 & flag) {
        n0 = n0 - 1
      } else {
        n1 = n1 + 1
      }
    }

    # Save the results and report progress if "i" is a multiple of 10,000
    res[i, ] = c(n0, n1)
    if(i %% 1e4 == 0 && sim %in% seq(1, n_sim, by = n_cores)) {
      # plot(res[1:i, 1], ylim = range(res[1:i, ]), type = "l", lwd = 3, panel.first = grid())
      # lines(res[1:i, 2], col = "Red", lwd = 3)
      print(paste0(sim, ": ", i))
    }
  }

  return(res)
}



Here is an example of 25 results:
enter image description here



And a histogram of the percentage of particles in the minority species at the end of each simulation:
enter image description here



So the results still agree with the simpler model in that such systems will tend to have a dominant species of particle.



Edit 4:



After a further helpful conversation with Chris, he suggested that annihilation of more than one particle pair per step was the crucial missing factor. Specifically, the number of removed pairs should be a sample from a Poisson distribution with mean proportional to the product of the particle counts, i.e. rpois(1, m*n0*n1), where m is small enough that annihilations are very rare until a large number of matter and antimatter particles exist.



Here is the code (which is quite different from earlier):



require(doParallel)

# Number of simulations to run and threads to use in parallel
n_sim = 100
n_cores = 30

# Initial number of each type of particle, probability, and annihilation constant
n0 = 0
n1 = 0
p0 = 0.5
m = 10^-4

# Run the simulation for 250k steps
n_steps = 250e3

registerDoParallel(cores = n_cores)
out = foreach(sim = 1:n_sim) %dopar% {
  # Initialize output matrix
  res = matrix(ncol = 3, nrow = n_steps)

  for(i in 1:n_steps) {
    # Generate a new particle with 50/50 chance of matter/antimatter
    x = sample(0:1, 1, prob = c(p0, 1 - p0))

    # If "x" is a matter particle then...
    if(x == 0) {
      n0 = n0 + 1
    }

    # If "x" is an antimatter particle then...
    if(x == 1) {
      n1 = n1 + 1
    }

    # Delete a Poisson-distributed number of particle pairs with mean m*n0*n1
    n_del = rpois(1, m*n0*n1)
    n0 = max(0, n0 - n_del)
    n1 = max(0, n1 - n_del)

    # Save the counts and the minority fraction; report progress every 10,000 steps
    res[i, 1:2] = c(n0, n1)
    res[i, 3] = min(res[i, 1:2])/sum(res[i, 1:2])
    if(i %% 1e4 == 0 && sim %in% seq(1, n_sim, by = n_cores)) {
      # plot(res[1:i, 1], ylim = range(res[1:i, ]), type = "l", lwd = 3, panel.first = grid())
      # lines(res[1:i, 2], col = "Red", lwd = 3)
      print(paste0(sim, ": ", i))
    }
  }

  return(res)
}



And here are the results for various values of "m" (which controls how often annihilation occurs). Each plot shows, for every step, the average proportion of particles in the minority species (over 100 simulations per value of m) as the blue line; the green line is the median, and the bands are +/- 1 sd from the mean:



enter image description here



The first plot shows the same behavior as my earlier simulations, and you can see that as m gets smaller (i.e. annihilation at a given number of particles becomes rarer) the system tends to stay in a more symmetric state (50/50 matter/antimatter), at least for more steps.



So a key assumption made by physicists seems to be that the annihilation rate in the early universe was very low, so that enough particles could accumulate that neither species was likely to ever get totally "wiped out".
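A rough way to see why smaller m keeps the system symmetric for longer: each step adds one particle, split on average evenly between the two species, while annihilation removes about m*n0*n1 pairs per step. Setting gain equal to loss for a symmetric state n0 ≈ n1 ≈ n gives 1/2 ≈ m*n^2, i.e. n ≈ sqrt(1/(2m)), so smaller m means a larger quasi-steady population, relatively smaller fluctuations, and a longer wait before one species is wiped out:

# Quasi-steady particle number per species implied by "creation rate ~ annihilation rate"
# (a rough estimate that assumes n0 and n1 hover near a common value n)
m = c(10^-4, 10^-6, 10^-7)
sqrt(1 / (2 * m))   # roughly 71, 707 and 2236 particles respectively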



EDIT 5:



I ran one of those Poisson simulations for 8 million steps with m = 10^-6 and you can see that it just takes longer for the dominance to play out (it looks slightly different because the 1 sigma fill wouldn't plot with so many data points):
enter image description here



So from that I conclude that very low annihilation rates just delay how long the process takes, rather than producing a fundamentally different outcome.



Edit 6:



The same thing happens with m = 10^-7 and 28 million steps. The aggregate chart looks the same as the m = 10^-6 case above with 8 million steps, so here are some individual examples instead. You can see a clear trend towards a dominant species, just as in the original model:
enter image description here



Edit 7:



To wrap this up... I think the answer to the question ("why do physicists think this?") is clear from my conversation with Chris here. Chris does not seem interested in making that into an answer, but I will accept it if someone writes something similar.







black-holes antimatter hawking-radiation baryogenesis

asked Sep 30 at 22:30 by Livid, edited Oct 8 at 22:57

  • Comments are not for extended discussion; this conversation has been moved to chat. – Chris, Oct 2 at 21:08

  • This example uses one type of particle/antiparticle. If it is charged, proton/antiproton (which is your argument), then charge conservation and baryon number conservation are the same, which makes it not physical. If you look at the literature on "black hole baryogenesis", extra assumptions are needed, model dependent (GUTs or something else), for the models to be predictive, not just random walks. – anna v, Oct 3 at 3:50

  • The above is true for a single black hole. There is no mechanism that ensures that for a statistical distribution of black holes it will be the same particle that dominates each black hole. Statistics for two particles would say that it would be 50/50, unless a model is used, and that is why the models are necessary. – anna v, Oct 3 at 3:59

  • @annav Basically I am trying to keep it simple, because I think you would need to come up with a mechanism that prevents a dominance of matter over anti-matter (or vice versa). It would not be difficult to add another set of particle pairs to this model, but I do not want to at this time. There are many other details of the calculation that are inconsistent with our universe already. The results are only qualitatively meaningful. – Livid, Oct 3 at 4:32

  • I am trying to say that a random walk around one black hole is not enough, as with many black holes both protons and antiprotons may be absorbed with a 50/50 chance, and that a physics model is needed that will introduce baryons and lead to baryon number violation. – anna v, Oct 3 at 4:49

















4 Answers


76

Congratulations on finding a method for baryogenesis that works! Indeed, it's true that if you have a bunch of black holes, then by random chance you'll get an imbalance. And this imbalance will remain even after the black holes evaporate, because the result of the evaporation doesn't depend on the overall baryon number that went into the black hole.



Black holes can break conservation laws like that. The only conservation laws they can't break are the ones where you can measure the conserved quantity from outside. For example, charge is still conserved because you can keep track of the charge of the black hole by measuring its electric field. In the Standard Model, baryon number has no such associated field.



Also, you need to assume that enough black holes form to make your mechanism work. In the standard models, this doesn't happen, despite the high temperatures. If you start with a standard Big Bang, the universe expands too fast for black holes to form.




However, in physics, finding a mechanism that solves a problem isn't the end -- it's the beginning. We aren't all sitting around scratching our heads for any mechanism to achieve baryogenesis. There are actually at least ten known, conceptually distinct ways to do it (including yours), fleshed out in hundreds of concrete models. The problem is that all of them require speculative new physics, additions to the core models that we have already experimentally verified. Nobody can declare that a specific one of these models is true, in the absence of any independent evidence.



It's kind of like we're all sitting around trying to find the six-digit password for a safe. If you walk by and say "well, obviously it could be 927583", without any further evidence, that's technically true. But you have not cracked the safe. The problem of baryogenesis isn't analogous to coming up with any six-digit number; that's easy. The problem is that we don't know which one is relevant, which mechanism actually exists in our universe.



What physicists investigating these questions actually do involves trying to link these models to things we can measure, or coming up with simple models that explain multiple puzzles at once. For example, one way to test a model with primordial black holes is to compute how many would be heavy enough to survive until the present day, in which case you can go looking for them. Or, if they were created by some new physics, you could look for that new physics. Yet another strand is to note that if enough primordial black holes are still around today, they could be the dark matter, so you could try to get both baryogenesis and dark matter right simultaneously. All of this involves a lot of reading, math, and simulation.






answered Oct 1 at 1:58 by knzhou, edited Oct 3 at 0:37

  • Comments are not for extended discussion; this conversation has been moved to chat. – tpg2114, Oct 1 at 23:09

  • It sounds like the OP asked "Why is X surprising, when I can construct a perfectly reasonable explanation for it?". And your response is "There are a bunch of possible explanations for it." That doesn't directly answer the OP's question. It sounds like the direct answer is "The existence of an imbalance isn't itself surprising ... it's just that we don't specifically know why it exists" ... In your analogy, we aren't surprised by the assertion that there IS something under the magician's hat, ... we just don't know what that thing is. – Reinstate Monica --Brondahl--, Oct 2 at 9:51

  • Deleted a further round of extended discussion in comments. If anyone wants to have a conversation here, take it to chat. – ACuriousMind, Oct 3 at 8:23


















30


















Locality

The random walk would be expected to create different (opposing) asymmetries in different regions, including regions that are distant enough not to affect each other. If this were the main cause of the asymmetry, then we'd expect it to produce a predominance of matter in some areas of the observable universe and a predominance of antimatter in other areas.


However, we observe a global, universal predominance of matter that seems uniform across the whole observable universe; any boundary between matter and antimatter regions would create observable effects that don't seem to exist.






  • If "black hole baryogenesis" was most common soon after the big bang, then I would expect the results to be uniform today. Basically I am imagining that the rate of baryogenesis is much lower today than in the past because there are fewer black holes (because the universe has become less dense). – Livid, Oct 1 at 14:00

  • Since you think we can observe this, I want to know how we can observe the matter/antimatterness of a galaxy outside the local group. – Joshua, Oct 1 at 20:09

  • @Joshua the gist is that while interstellar space is very, very sparse, if in two neighbouring galaxies (and their surrounding regions of space) one was predominantly matter and the other was antimatter, then on the boundary between the "matter region" and "antimatter region", even if that boundary (the not-a-galaxy "empty" space) is as sparse as it gets, that boundary region is unavoidably large enough that the annihilation events and their energy signature would be frequent enough to be clearly observable, and unusual enough that they would not get confused with something else. – Peteris, Oct 1 at 21:59

  • Are you willing to say the same for interactions between galaxy groups? Interaction between groups falls off pretty fast. – Joshua, Oct 1 at 22:30

  • @Livid Black holes have nothing to do with absolute density; it's about density gradients. When inflation ended, energy density was huge, yes - but it was also essentially the same everywhere. There's no reason to expect there were more black holes than today, especially the kind that doesn't evaporate in a very short time. Keep in mind that charge is conserved - the escaping -1 of a single electron is balanced by an extra +1 on the black hole - which of course both makes the black hole repel further positrons and attract electrons, and when Hawking radiation is emitted, it is not neutral. – Luaan, Oct 2 at 7:04


















22


















Your simulation randomly creates a single particle that is either type 1 or type 2. If these two types are charged, then either of these two creation processes violates conservation of charge. Charge conservation is an absolute law of physics as far as we know, and this includes processes like the formation and evaporation of black holes.



The OP clarified in a comment:




Both particles are created, but one gets sucked into the black hole. The other stays outside. So the charge should be conserved.




Sucking a particle into a black hole doesn't mean that it's as if the particle has never existed. Its mass-energy, charge, and angular momentum are still present in the black hole. This is an example of the universality of charge conservation as a law of physics.



In your simulation, let's say that the particles represented by the black curve on your graph are electrons. Then the universe in your simulation has accumulated a big surplus of positively charged black holes. These positively charged black holes do not (on the average) emit electrically neutral Hawking radiation. By the time they evaporate completely, they will have emitted positively charged particles such as positrons in a quantity that exactly equals the number of positive charges they consumed. (I'm making these statements as if we knew for sure that Hawking radiation operated in a certain way. Actually we have no direct evidence, and the methods of semiclassical gravity have never been compared against observation in any way, so we can't really be sure that they work.)
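To make this concrete, here is a minimal sketch (my own illustration; the charge assignments are assumptions for the example): take the first simulation from the question, treat a "type 0" event as an electron escaping while a positron falls in, and a "type 1" event as the reverse, and keep a ledger of the charge swallowed by the black holes:

# Charge bookkeeping added to the question's first simulation (illustrative sketch only)
# Assumes type 0 = electron escapes / positron absorbed, type 1 = positron escapes / electron absorbed
n0 = n1 = 0
bh_charge = 0
for(i in 1:1e5) {
  x = sample(0:1, 1)
  if(x == 0) {                       # electron escapes, positron falls in
    bh_charge = bh_charge + 1
    if(n1 > 0) n1 = n1 - 1 else n0 = n0 + 1
  } else {                           # positron escapes, electron falls in
    bh_charge = bh_charge - 1
    if(n0 > 0) n0 = n0 - 1 else n1 = n1 + 1
  }
}
c(charge_outside = n1 - n0, charge_in_black_holes = bh_charge)   # these always sum to zero

Whatever surplus builds up outside is exactly matched by the opposite charge sitting in the black holes, which is why they cannot (on average) evaporate into neutral radiation.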



You may want to look at the Sakharov conditions: https://en.wikipedia.org/wiki/Baryogenesis#GUT_Baryogenesis_under_Sakharov_conditions






  • Comments are not for extended discussion; this conversation has been moved to chat. – tpg2114, Oct 1 at 23:10


















4


















Matter and antimatter aren't created one particle at a time at random. To fix your simulation, you'd need to always create an electron at the same time you create a positron.


Does adding a black hole break that symmetry? Definitely not. Charge is still conserved - the positron that "falls" into the black hole doesn't just disappear - it alters the charge, angular momentum etc. of the entire black hole. So if the black hole absorbs an electron, it becomes slightly more negatively charged. The black hole isn't just as likely to absorb an electron as it is to absorb a positron - every unbalancing of the charge means that differently charged particles are attracted differently. If your black hole somehow managed to consume a billion electrons at once, it would repel electrons while attracting positrons. The imbalance would quickly disappear.


But that's not how Hawking radiation works anyway (assuming it actually exists). There's no particle-antiparticle pair that gets created near the event horizon of the black hole, of which one is randomly created in a way that makes it fall into the hole while the other escapes. If this were the right picture, Hawking radiation would be emitted in all kinds of warped spacetimes, including the Earth's. The so-called virtual particles are disturbances in the underlying quantum fields that aren't particles. A real particle is a stable wave packet in the field which self-propagates and, if you ignore interactions with other fields, doesn't ever go away; it has a definite mass, electric charge etc. Virtual particles are more like the water ripples you get when you throw a stone in a pond. A virtual electron doesn't have the electron's mass, or its charge, or anything else. It's not an electron that annihilates with a positron too fast to be observable - it isn't a particle at all.

Sometimes, people confuse things like "2 photons -> 1 electron + 1 positron -> 2 photons" with virtual particles. There's no useful similarity between the two. A black hole is kind of like a hole in a drum skin. Its presence means that some vibrational modes aren't allowed, while others that wouldn't be allowed in an intact drum... are.


Of course, positrons aren't the only positively charged particles in the universe. Could the radiation be in the form of protons? Not really. Protons are really, really massive compared to electrons and positrons. Even if you had a black hole small enough to have Hawking radiation energetic enough to produce protons, you would expect a thousand positrons for every proton produced, even assuming positrons and protons worked exactly the same way (which they don't). And of course, you would still expect anti-protons to be produced in the same quantities - the end result is still a universe with no matter or anti-matter.



This also means that it's actually pretty hard to make matter out of Hawking radiation! If you believe the popular description of Hawking radiation and virtual particles, you should expect that a bigger black hole produces a lot more radiation than a small black hole. The "spontaneous pair formation" should be the same in all of space-time, so all else equal, a bigger black hole should "capture" more of the pairs than a smaller black hole; if we ignore all the effects like charge and gravity, you should expect that the pair production would correspond to the surface area of the black hole (the event horizon).



But this isn't the case at all - smaller black holes are expected to emit far more radiation than large black holes. Not only that, but larger black holes emit far longer wavelengths (and thus less energetic particles, almost exclusively photons and perhaps neutrinos) than small black holes. Stellar-mass black holes are way too massive to produce something like an electron - even an Earth-mass black hole is way too massive (by about seven orders of magnitude, according to this answer's estimate). Needless to say, you should expect protons only from even smaller black holes, and in vastly lower quantities than electrons; and for a non-charged black hole, you will get roughly as many electrons as positrons. The symmetry is still there, because in the end you're not randomly consuming one of a pair of spontaneously formed particles - you're distorting the oscillations in the electric quantum field, and the result depends on what the value of the field was in the first place.


To explain the dominance of matter in the universe, you need something that prefers matter. Even if a mere statistical fluke were enough to explain the matter dominance, you wouldn't expect the universe to be uniformly made out of matter. You should expect pockets of matter next to pockets of anti-matter. But that would be very obvious in our surveys, even from very far away - the interface between a matter pocket and an anti-matter pocket would glow like crazy. We don't see that.



But we've actually managed to find one thing in the universe that treats matter and anti-matter differently - the symmetry isn't perfect. This is the weak nuclear force, and physicists are very excited to learn more about this fundamental force which has been a bit neglected so far; and especially about its connection with the relatively recently confirmed Higgs field. I'm not going to elaborate, since this is already way too long :)



We still have one important thing left. You assume that if there was a lot of energy density in the early universe, this must have caused black holes to form. This is a common misunderstanding - black holes do not form as a result of a high density of matter or energy. They require a density gradient. The distinction doesn't matter much when you're talking about the black holes we observe - you're essentially comparing the near-vacuum's energy density to massively compacted stellar masses of matter. But while the early universe, before baryogenesis, had a very high energy density, it was also incredibly uniform - it was (and still is) very, very flat. There was no "sloping" of the spacetime, and no black holes. You would first need significant deviations, localised pockets of low energy density - but we see in the cosmic background radiation that the universe was extremely uniform (at the point where it became largely transparent to radiation).






  • "Matter and antimatter isn't created one at a time at random. To fix your simulation, you'd need to always create an electron at the same time you create a positron." That is what happens in Hawking radiation, which my simulation models. – Livid, Oct 3 at 9:42

  • "To explain the dominance of matter in the universe, you need something that prefers matter." I think you missed an essential part of the question. Whatever is left is called "matter". That's anthropocentric nomenclature. If by random chance the other sort of matter had survived, that would have been called matter instead. There's no need for a preference, just symmetry breaking. – MSalters, Oct 3 at 10:24

  • @MSalters You need a preference to explain why the symmetry breaks the same way everywhere in the universe, rather than in random matter and antimatter patches. – Chris, Oct 3 at 18:16

  • @Livid There are parts of the universe that are causally disconnected from one another - i.e. they haven't had any chance to mix with one another since the Big Bang (or the end of inflation, in inflationary models). Anyway, if you're positing that random chance caused the baryon imbalance we see today, there's no point in bringing up black holes at all - it's at least as likely to happen by random chance without black holes. – Chris, Oct 3 at 20:36

  • @Livid A local baryon imbalance on some scale is guaranteed just by random chance: i.e. the density of baryon number is not uniformly zero. Do the same random walk just with particles leaving and entering a volume of space and you'll get similar results. – Chris, Oct 3 at 21:00















4 Answers
4






active

oldest

votes








4 Answers
4






active

oldest

votes









active

oldest

votes






active

oldest

votes









76

















$begingroup$

Congratulations on finding a method for baryogenesis that works! Indeed, it's true that if you have a bunch of black holes, then by random chance you'll get an imbalance. And this imbalance will remain even after the black holes evaporate, because the result of the evaporation doesn't depend on the overall baryon number that went into the black hole.



Black holes can break conservation laws like that. The only conservation laws they can't break are the ones where you can measure the conserved quantity from outside. For example, charge is still conserved because you can keep track of the charge of the black hole by measuring its electric field. In the Standard Model, baryon number has no such associated field.



Also, you need to assume that enough black holes form to make your mechanism work. In the standard models, this doesn't happen, despite the high temperatures. If you start with a standard Big Bang, the universe expands too fast for black holes to form.




However, in physics, finding a mechanism that solves a problem isn't the end -- it's the beginning. We aren't all sitting around scratching our heads for any mechanism to achieve baryogenesis. There are actually at least ten known, conceptually distinct ways to do it (including yours), fleshed out in hundreds of concrete models. The problem is that all of them require speculative new physics, additions to the core models that we have already experimentally verified. Nobody can declare that a specific one of these models is true, in the absence of any independent evidence.



It's kind of like we're all sitting around trying to find the six-digit password for a safe. If you walk by and say "well, obviously it could be 927583", without any further evidence, that's technically true. But you have not cracked the safe. The problem of baryogenesis isn't analogous to coming up with any six-digit number, that's easy. The problem is that we don't know which one is relevant, which mechanism actually exists in our universe.



What physicists investigating these questions actually do involves trying to link these models to things we can measure, or coming up with simple models that explain multiple puzzles at once. For example, one way to test a model with primordial black holes is to compute the amount heavy enough to live until the present day, in which case you can go looking for them. Or, if they were created by some new physics, you could look for that new physics. Yet another strand is to note that if enough primordial black holes still are around today, they could be the dark matter, so you could try to get both baryogenesis and dark matter right simultaneously. All of this involves a lot of reading, math, and simulation.






share|cite|improve this answer












$endgroup$









  • 1




    $begingroup$
    Comments are not for extended discussion; this conversation has been moved to chat.
    $endgroup$
    – tpg2114
    Oct 1 at 23:09






  • 14




    $begingroup$
    It sounds like the OP asked "Why is X surprising, when I can construct a perfectly reasonable explanation for it?". And your response is "There are a bunch of possible explanations for it." That doesn't directly answer the OP's question. It sounds like the direct answer is "The existence of an imbalance isn't itself surprising ... its just that we don't specifically know why it exists" ... In your analogy, we aren't surprised by the assertion that the IS something under the magician's hat, ... we just don't know what that thing is.
    $endgroup$
    – Reinstate Monica --Brondahl--
    Oct 2 at 9:51






  • 3




    $begingroup$
    Deleted a further round of extended discussion in comments. If anyone wants to have a conversation here, take it to chat.
    $endgroup$
    – ACuriousMind
    Oct 3 at 8:23















76

















$begingroup$

Congratulations on finding a method for baryogenesis that works! Indeed, it's true that if you have a bunch of black holes, then by random chance you'll get an imbalance. And this imbalance will remain even after the black holes evaporate, because the result of the evaporation doesn't depend on the overall baryon number that went into the black hole.



Black holes can break conservation laws like that. The only conservation laws they can't break are the ones where you can measure the conserved quantity from outside. For example, charge is still conserved because you can keep track of the charge of the black hole by measuring its electric field. In the Standard Model, baryon number has no such associated field.



Also, you need to assume that enough black holes form to make your mechanism work. In the standard models, this doesn't happen, despite the high temperatures. If you start with a standard Big Bang, the universe expands too fast for black holes to form.




However, in physics, finding a mechanism that solves a problem isn't the end -- it's the beginning. We aren't all sitting around scratching our heads for any mechanism to achieve baryogenesis. There are actually at least ten known, conceptually distinct ways to do it (including yours), fleshed out in hundreds of concrete models. The problem is that all of them require speculative new physics, additions to the core models that we have already experimentally verified. Nobody can declare that a specific one of these models is true, in the absence of any independent evidence.



It's kind of like we're all sitting around trying to find the six-digit password for a safe. If you walk by and say "well, obviously it could be 927583", without any further evidence, that's technically true. But you have not cracked the safe. The problem of baryogenesis isn't analogous to coming up with any six-digit number, that's easy. The problem is that we don't know which one is relevant, which mechanism actually exists in our universe.



What physicists investigating these questions actually do involves trying to link these models to things we can measure, or coming up with simple models that explain multiple puzzles at once. For example, one way to test a model with primordial black holes is to compute the amount heavy enough to live until the present day, in which case you can go looking for them. Or, if they were created by some new physics, you could look for that new physics. Yet another strand is to note that if enough primordial black holes still are around today, they could be the dark matter, so you could try to get both baryogenesis and dark matter right simultaneously. All of this involves a lot of reading, math, and simulation.






share|cite|improve this answer












$endgroup$









  • 1




    $begingroup$
    Comments are not for extended discussion; this conversation has been moved to chat.
    $endgroup$
    – tpg2114
    Oct 1 at 23:09






  • 14




    $begingroup$
    It sounds like the OP asked "Why is X surprising, when I can construct a perfectly reasonable explanation for it?". And your response is "There are a bunch of possible explanations for it." That doesn't directly answer the OP's question. It sounds like the direct answer is "The existence of an imbalance isn't itself surprising ... its just that we don't specifically know why it exists" ... In your analogy, we aren't surprised by the assertion that the IS something under the magician's hat, ... we just don't know what that thing is.
    $endgroup$
    – Reinstate Monica --Brondahl--
    Oct 2 at 9:51






  • 3




    $begingroup$
    Deleted a further round of extended discussion in comments. If anyone wants to have a conversation here, take it to chat.
    $endgroup$
    – ACuriousMind
    Oct 3 at 8:23













76















76











76







$begingroup$

Congratulations on finding a method for baryogenesis that works! Indeed, it's true that if you have a bunch of black holes, then by random chance you'll get an imbalance. And this imbalance will remain even after the black holes evaporate, because the result of the evaporation doesn't depend on the overall baryon number that went into the black hole.



Black holes can break conservation laws like that. The only conservation laws they can't break are the ones where you can measure the conserved quantity from outside. For example, charge is still conserved because you can keep track of the charge of the black hole by measuring its electric field. In the Standard Model, baryon number has no such associated field.



Also, you need to assume that enough black holes form to make your mechanism work. In the standard models, this doesn't happen, despite the high temperatures. If you start with a standard Big Bang, the universe expands too fast for black holes to form.




However, in physics, finding a mechanism that solves a problem isn't the end -- it's the beginning. We aren't all sitting around scratching our heads for any mechanism to achieve baryogenesis. There are actually at least ten known, conceptually distinct ways to do it (including yours), fleshed out in hundreds of concrete models. The problem is that all of them require speculative new physics, additions to the core models that we have already experimentally verified. Nobody can declare that a specific one of these models is true, in the absence of any independent evidence.



It's kind of like we're all sitting around trying to find the six-digit password for a safe. If you walk by and say "well, obviously it could be 927583", without any further evidence, that's technically true. But you have not cracked the safe. The problem of baryogenesis isn't analogous to coming up with any six-digit number, that's easy. The problem is that we don't know which one is relevant, which mechanism actually exists in our universe.



What physicists investigating these questions actually do involves trying to link these models to things we can measure, or coming up with simple models that explain multiple puzzles at once. For example, one way to test a model with primordial black holes is to compute the amount heavy enough to live until the present day, in which case you can go looking for them. Or, if they were created by some new physics, you could look for that new physics. Yet another strand is to note that if enough primordial black holes still are around today, they could be the dark matter, so you could try to get both baryogenesis and dark matter right simultaneously. All of this involves a lot of reading, math, and simulation.






share|cite|improve this answer












$endgroup$



Congratulations on finding a method for baryogenesis that works! Indeed, it's true that if you have a bunch of black holes, then by random chance you'll get an imbalance. And this imbalance will remain even after the black holes evaporate, because the result of the evaporation doesn't depend on the overall baryon number that went into the black hole.



Black holes can break conservation laws like that. The only conservation laws they can't break are the ones where you can measure the conserved quantity from outside. For example, charge is still conserved because you can keep track of the charge of the black hole by measuring its electric field. In the Standard Model, baryon number has no such associated field.



Also, you need to assume that enough black holes form to make your mechanism work. In the standard models, this doesn't happen, despite the high temperatures. If you start with a standard Big Bang, the universe expands too fast for black holes to form.




However, in physics, finding a mechanism that solves a problem isn't the end -- it's the beginning. We aren't all sitting around scratching our heads for any mechanism to achieve baryogenesis. There are actually at least ten known, conceptually distinct ways to do it (including yours), fleshed out in hundreds of concrete models. The problem is that all of them require speculative new physics, additions to the core models that we have already experimentally verified. Nobody can declare that a specific one of these models is true, in the absence of any independent evidence.



It's kind of like we're all sitting around trying to find the six-digit password for a safe. If you walk by and say "well, obviously it could be 927583", without any further evidence, that's technically true. But you have not cracked the safe. The problem of baryogenesis isn't analogous to coming up with any six-digit number, that's easy. The problem is that we don't know which one is relevant, which mechanism actually exists in our universe.



What physicists investigating these questions actually do involves trying to link these models to things we can measure, or coming up with simple models that explain multiple puzzles at once. For example, one way to test a model with primordial black holes is to compute the amount heavy enough to live until the present day, in which case you can go looking for them. Or, if they were created by some new physics, you could look for that new physics. Yet another strand is to note that if enough primordial black holes still are around today, they could be the dark matter, so you could try to get both baryogenesis and dark matter right simultaneously. All of this involves a lot of reading, math, and simulation.







share|cite|improve this answer















share|cite|improve this answer




share|cite|improve this answer








edited Oct 3 at 0:37

























answered Oct 1 at 1:58









knzhouknzhou

64.6k15 gold badges169 silver badges281 bronze badges




64.6k15 gold badges169 silver badges281 bronze badges










  • 1




    $begingroup$
    Comments are not for extended discussion; this conversation has been moved to chat.
    $endgroup$
    – tpg2114
    Oct 1 at 23:09






  • 14




    $begingroup$
    It sounds like the OP asked "Why is X surprising, when I can construct a perfectly reasonable explanation for it?". And your response is "There are a bunch of possible explanations for it." That doesn't directly answer the OP's question. It sounds like the direct answer is "The existence of an imbalance isn't itself surprising ... its just that we don't specifically know why it exists" ... In your analogy, we aren't surprised by the assertion that the IS something under the magician's hat, ... we just don't know what that thing is.
    $endgroup$
    – Reinstate Monica --Brondahl--
    Oct 2 at 9:51






  • 3




    $begingroup$
    Deleted a further round of extended discussion in comments. If anyone wants to have a conversation here, take it to chat.
    $endgroup$
    – ACuriousMind
    Oct 3 at 8:23












  • 1




    $begingroup$
    Comments are not for extended discussion; this conversation has been moved to chat.
    $endgroup$
    – tpg2114
    Oct 1 at 23:09






  • 14




    $begingroup$
    It sounds like the OP asked "Why is X surprising, when I can construct a perfectly reasonable explanation for it?". And your response is "There are a bunch of possible explanations for it." That doesn't directly answer the OP's question. It sounds like the direct answer is "The existence of an imbalance isn't itself surprising ... its just that we don't specifically know why it exists" ... In your analogy, we aren't surprised by the assertion that the IS something under the magician's hat, ... we just don't know what that thing is.
    $endgroup$
    – Reinstate Monica --Brondahl--
    Oct 2 at 9:51






  • 3




    $begingroup$
    Deleted a further round of extended discussion in comments. If anyone wants to have a conversation here, take it to chat.
    $endgroup$
    – ACuriousMind
    Oct 3 at 8:23







1




1




$begingroup$
Comments are not for extended discussion; this conversation has been moved to chat.
$endgroup$
– tpg2114
Oct 1 at 23:09




$begingroup$
Comments are not for extended discussion; this conversation has been moved to chat.
$endgroup$
– tpg2114
Oct 1 at 23:09




14




14




$begingroup$
It sounds like the OP asked "Why is X surprising, when I can construct a perfectly reasonable explanation for it?". And your response is "There are a bunch of possible explanations for it." That doesn't directly answer the OP's question. It sounds like the direct answer is "The existence of an imbalance isn't itself surprising ... its just that we don't specifically know why it exists" ... In your analogy, we aren't surprised by the assertion that the IS something under the magician's hat, ... we just don't know what that thing is.
$endgroup$
– Reinstate Monica --Brondahl--
Oct 2 at 9:51




$begingroup$
It sounds like the OP asked "Why is X surprising, when I can construct a perfectly reasonable explanation for it?". And your response is "There are a bunch of possible explanations for it." That doesn't directly answer the OP's question. It sounds like the direct answer is "The existence of an imbalance isn't itself surprising ... its just that we don't specifically know why it exists" ... In your analogy, we aren't surprised by the assertion that the IS something under the magician's hat, ... we just don't know what that thing is.
$endgroup$
– Reinstate Monica --Brondahl--
Oct 2 at 9:51




3




3




$begingroup$
Deleted a further round of extended discussion in comments. If anyone wants to have a conversation here, take it to chat.
$endgroup$
– ACuriousMind
Oct 3 at 8:23




$begingroup$
Deleted a further round of extended discussion in comments. If anyone wants to have a conversation here, take it to chat.
$endgroup$
– ACuriousMind
Oct 3 at 8:23













30

















$begingroup$

Locality



The random walk would be expected to create different (opposing) asymetries in different regions, including regions that are distant enough to not affect each other. If this would be the main cause of asymetry, then we'd expect it to cause a predominance of matter in some areas of the observable universe and a predominance of antimatter in other areas of the observable universe.



However, we observe a global, universal predominance of matter that seems uniform across all the observable universe; as any boundary between matter and antimatter regions would create observable effects that don't seem to exist.






answered Oct 1 at 13:52 by Peteris










If "black hole baryogenesis" was most common soon after the big bang then I would expect the results to be uniform today. Basically I am imagining that the rate of baryogenesis is much lower today than in the past because there are fewer black holes (because the universe has become less dense).
– Livid
Oct 1 at 14:00

Since you think we can observe this, I want to know how we can observe the matter/antimatterness of a galaxy outside the local group.
– Joshua
Oct 1 at 20:09

@Joshua the gist is that while intergalactic space is very, very sparse, if of two neighbouring galaxies (and their surrounding regions of space) one was predominantly matter and the other antimatter, then on the boundary between the "matter region" and the "antimatter region" - even if that boundary (the not-a-galaxy "empty" space) is as sparse as it gets - the boundary region is unavoidably large enough that annihilation events and their energy signature would be frequent enough to be clearly observable, and unusual enough not to be confused with anything else.
– Peteris
Oct 1 at 21:59

Are you willing to say the same for interactions between galaxy groups? Interaction between groups falls off pretty fast.
– Joshua
Oct 1 at 22:30

@Livid Black holes have nothing to do with absolute density; it's about density gradients. When inflation ended, energy density was huge, yes - but it was also essentially the same everywhere. There's no reason to expect there were more black holes than today, especially the kind that doesn't evaporate in a very short time. Keep in mind that charge is conserved - the escaping -1 of a single electron is balanced by an extra +1 on the black hole - which of course both makes the black hole repel further positrons and attract electrons, and when Hawking radiation is emitted, it is not neutral.
– Luaan
Oct 2 at 7:04

Your simulation randomly creates a single particle that is either type 1 or type 2. If these two types are charged, then either of these two creation processes violates conservation of charge. Charge conservation is an absolute law of physics as far as we know, and this includes processes like the formation and evaporation of black holes.



The OP clarified in a comment:




"Both particles are created, but one gets sucked into the black hole. The other stays outside. So the charge should be conserved."




Sucking a particle into a black hole doesn't mean that it's as if the particle has never existed. Its mass-energy, charge, and angular momentum are still present in the black hole. This is an example of the universality of charge conservation as a law of physics.



In your simulation, let's say that the particles represented by the black curve on your graph are electrons. Then the universe in your simulation has accumulated a big surplus of positively charged black holes. These positively charged black holes do not (on the average) emit electrically neutral Hawking radiation. By the time they evaporate completely, they will have emitted positively charged particles such as positrons in a quantity that exactly equals the number of positive charges they consumed. (I'm making these statements as if we knew for sure that Hawking radiation operated in a certain way. Actually we have no direct evidence, and the methods of semiclassical gravity have never been compared against observation in any way, so we can't really be sure that they work.)
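
As a rough sketch of that bookkeeping (my illustration, not part of the original answer; the step count is arbitrary), here is the same random walk with charge conservation made explicit: every particle that escapes leaves the opposite charge behind on the hole, and when the hole finally evaporates it has to hand that stored charge back, cancelling the surplus outside:

    import random

    random.seed(1)

    outside_charge = 0   # net charge of particles that escaped to infinity
    hole_charge = 0      # net charge swallowed by the black holes

    # Pair production near the horizon: one member escapes, its partner
    # (with the opposite charge) falls in. Charge is conserved at every step.
    for _ in range(100_000):
        escaped = random.choice((-1, +1))   # -1: electron escapes, +1: positron escapes
        outside_charge += escaped
        hole_charge -= escaped

    print("before evaporation:", outside_charge, hole_charge, outside_charge + hole_charge)

    # Complete evaporation: the hole must radiate its stored charge back out
    # as real, charged particles, so the outside surplus is cancelled.
    outside_charge += hole_charge
    hole_charge = 0

    print("after evaporation:", outside_charge, hole_charge)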



You may want to look at the Sakharov conditions: https://en.wikipedia.org/wiki/Baryogenesis#GUT_Baryogenesis_under_Sakharov_conditions






answered Sep 30 at 22:53, edited Sep 30 at 23:24, by Ben Crowell













Comments are not for extended discussion; this conversation has been moved to chat.
– tpg2114
Oct 1 at 23:10


Matter and antimatter isn't created one at a time at random. To fix your simulation, you'd need to always create an electron at the same time you create a positron.



Does adding a black hole break that symmetry? Definitely not. Charge is still conserved - the positron that "falls" into the black hole doesn't just disappear - it alters the charge, angular momentum, etc. of the entire black hole. So if the black hole absorbs an electron, it becomes slightly more negatively charged. The black hole isn't just as likely to absorb an electron as it is to absorb a positron - every unbalancing of the charge means that differently charged particles are attracted differently. If your black hole somehow managed to consume a billion electrons at once, it would repel electrons while attracting positrons. The imbalance would quickly disappear.
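
One way to see how such a restoring force tames the random walk (a toy model of my own, not from the answer; the bias constant is an arbitrary illustrative value, not a physical one) is to let the capture probability depend on the charge the hole has already swallowed:

    import random

    def walk(steps, bias):
        """Net charge left outside after `steps` pair-capture events.
        bias = 0 reproduces the unbiased walk; bias > 0 makes the hole
        preferentially swallow particles of the opposite sign to its own charge."""
        hole_charge = 0
        for _ in range(steps):
            # Probability that the hole captures a positron (so an electron escapes)
            p_capture_positron = 0.5 - bias * hole_charge
            p_capture_positron = min(max(p_capture_positron, 0.0), 1.0)
            if random.random() < p_capture_positron:
                hole_charge += 1          # positron falls in
            else:
                hole_charge -= 1          # electron falls in
        return -hole_charge               # outside surplus is minus the hole's charge

    random.seed(2)
    print("unbiased |imbalance|:", abs(walk(100_000, bias=0.0)))
    print("charge-biased |imbalance|:", abs(walk(100_000, bias=0.001)))

Even a tiny charge-dependent bias turns the free random walk into a mean-reverting one, so the accumulated surplus stays bounded instead of growing like the square root of the number of events.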



But that's not how Hawking radiation works anyway (assuming it actually exists). There's no particle-antiparticle pair that gets created near the event horizon of the black hole, of which one is randomly created in a way that makes it fall into the hole while the other escapes. If this were the right picture, Hawking radiation would be emitted in all kinds of warped spacetimes, including the Earth's. The so-called virtual particles are disturbances in the underlying quantum fields that aren't particles. A real particle is a stable wave packet in the field which self-propagates and, if you ignore interactions with other fields, doesn't ever go away; it has a definite mass, electric charge, etc. Virtual particles are more like the water ripples you get when you throw a stone in a pond. A virtual electron doesn't have the electron's mass, or its charge, or anything else. It's not an electron that annihilates with a positron too fast to be observable - it isn't a particle at all. Sometimes people confuse things like "2 photons -> 1 electron + 1 positron -> 2 photons" with virtual particles; there's no useful similarity between the two. A black hole is kind of like a hole in a drum skin: its presence means that some vibrational modes aren't allowed, while others that wouldn't be allowed in an intact drum... are.



Of course, positrons aren't the only positively charged particle in the universe. Could the radiation be in the form of protons? Not really. Protons are really really massive compared to electrons and positrons. Even if you had a black hole small enough to have Hawking radiation energetic enough to produce protons, you would expect a thousand positrons for every proton produced, even assuming positrons and protons worked exactly the same way (which they don't). And of course, you would still expect anti-protons to be produced in the same quantities - the end result is still a universe with no matter or anti-matter.



This also means that it's actually pretty hard to make matter out of Hawking radiation! If you believe the popular description of Hawking radiation and virtual particles, you should expect that a bigger black hole produces a lot more radiation than a small black hole. The "spontaneous pair formation" should be the same in all of space-time, so all else equal, a bigger black hole should "capture" more of the pairs than a smaller black hole; if we ignore all the effects like charge and gravity, you should expect that the pair production would correspond to the surface area of the black hole (the event horizon).



But this isn't the case at all - smaller black holes are expected to emit far more radiation than large black holes. Not only that, but larger black holes emit far longer wavelengths (and thus less energetic particles, almost exclusively photons and perhaps neutrinos) than small black holes. Stellar-mass black holes are way too massive to produce something like an electron - even an Earth-mass black hole is way too much (many orders of magnitude too much). Needless to say, you should expect protons only from even smaller black holes, and in vastly lower quantities than electrons; and for an uncharged black hole, you will get roughly as many electrons as positrons. The symmetry is still there, because in the end you're not randomly consuming one of a pair of spontaneously formed particles - you're distorting the oscillations in the underlying quantum fields, and the result depends on what the value of the field was in the first place.
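
That scaling follows from the standard Hawking temperature formula T = ħc³ / (8πGMk_B); here is a quick numerical check (my addition, for illustration; the example masses are arbitrary):

    import math

    hbar = 1.054571817e-34   # J*s
    c = 2.99792458e8         # m/s
    G = 6.67430e-11          # m^3 kg^-1 s^-2
    k_B = 1.380649e-23       # J/K
    eV = 1.602176634e-19     # J

    def hawking_temperature(mass_kg):
        """Hawking temperature (kelvin) of a Schwarzschild black hole."""
        return hbar * c**3 / (8 * math.pi * G * mass_kg * k_B)

    for label, mass in [("solar mass", 1.989e30),
                        ("Earth mass", 5.972e24),
                        ("1e12 kg", 1e12)]:
        T = hawking_temperature(mass)
        print(f"{label:>12}: T ~ {T:.2e} K, typical quantum ~ {k_B * T / eV:.2e} eV")

Only black holes far lighter than a planet get hot enough for the typical emitted quantum to reach the electron rest energy of about 0.5 MeV, let alone the proton's roughly 938 MeV; solar-mass and Earth-mass holes radiate quanta many orders of magnitude softer.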



To explain the dominance of matter in the universe, you need something that prefers matter. Even if a mere statistical fluke were enough to explain the matter dominance, you wouldn't expect the universe to be uniformly made out of matter. You should expect pockets of matter next to pockets of antimatter. But that would be very obvious in our surveys, even from very far away - the interface between a matter pocket and an antimatter pocket would glow like crazy. We don't see that.



But we've actually managed to find one thing in the universe that treats matter and anti-matter differently - the symmetry isn't perfect. This is the weak nuclear force, and physicists are very excited to learn more about this fundamental force which has been a bit neglected so far; and especially about its connection with the relatively recently confirmed Higgs field. I'm not going to elaborate, since this is already way too long :)



We still have one important thing left. You assume that if there was a lot of energy density in the early universe, this must have caused black holes to form. This is a common misunderstanding - black holes do not form as a result of a high density of matter or energy alone. They require a density gradient. The distinction doesn't matter much when you're talking about the black holes we observe - there you're essentially comparing the near-vacuum's energy density to stellar masses of matter compacted into a tiny volume. But while the early universe, before baryogenesis, had a very high energy density, it was also incredibly uniform - it was (and still is) very, very flat. There was no "sloping" of the spacetime, and no black holes. You would first need significant deviations from uniformity - strongly overdense pockets next to underdense ones - but we see in the cosmic microwave background that the universe was extremely uniform (at the point where it became largely transparent to radiation).






answered Oct 3 at 8:03 by Luaan














"Matter and antimatter isn't created one at a time at random. To fix your simulation, you'd need to always create an electron at the same time you create a positron." That is what happens in hawking radiation, which my simulation models.
– Livid
Oct 3 at 9:42

"To explain the dominance of matter in the universe, you need something that prefers matter." I think you missed an essential part of the question. Whatever is left is called "matter". That's anthropocentric nomenclature. If by random chance the other sort of matter had survived, that would have been called matter instead. There's no need for a preference, just symmetry breaking.
– MSalters
Oct 3 at 10:24

@MSalters You need a preference to explain why the symmetry breaks the same way everywhere in the universe, rather than in random matter and antimatter patches.
– Chris
Oct 3 at 18:16

@Livid There are parts of the universe that are causally disconnected from one another - i.e. they haven't had any chance to mix with one another since the Big Bang (or the end of inflation, in inflationary models). Anyway, if you're positing that random chance caused the baryon imbalance we see today, there's no point in bringing up black holes at all - it's at least as likely to happen by random chance without black holes.
– Chris
Oct 3 at 20:36

@Livid A local baryon imbalance on some scale is guaranteed just by random chance: i.e. the density of baryon number is not uniformly zero. Do the same random walk just with particles leaving and entering a volume of space and you'll get similar results.
– Chris
Oct 3 at 21:00






















Highly active question. Earn 10 reputation in order to answer this question. The reputation requirement helps protect this question from spam and non-answer activity.












Highly active question. Earn 10 reputation in order to answer this question. The reputation requirement helps protect this question from spam and non-answer activity.











Highly active question. Earn 10 reputation in order to answer this question. The reputation requirement helps protect this question from spam and non-answer activity.





Highly active question. Earn 10 reputation in order to answer this question. The reputation requirement helps protect this question from spam and non-answer activity.


Popular posts from this blog

Tamil (spriik) Luke uk diar | Nawigatjuun

Align equal signs while including text over equalitiesAMS align: left aligned text/math plus multicolumn alignmentMultiple alignmentsAligning equations in multiple placesNumbering and aligning an equation with multiple columnsHow to align one equation with another multline equationUsing \ in environments inside the begintabularxNumber equations and preserving alignment of equal signsHow can I align equations to the left and to the right?Double equation alignment problem within align enviromentAligned within align: Why are they right-aligned?

Training a classifier when some of the features are unknownWhy does Gradient Boosting regression predict negative values when there are no negative y-values in my training set?How to improve an existing (trained) classifier?What is effect when I set up some self defined predisctor variables?Why Matlab neural network classification returns decimal values on prediction dataset?Fitting and transforming text data in training, testing, and validation setsHow to quantify the performance of the classifier (multi-class SVM) using the test data?How do I control for some patients providing multiple samples in my training data?Training and Test setTraining a convolutional neural network for image denoising in MatlabShouldn't an autoencoder with #(neurons in hidden layer) = #(neurons in input layer) be “perfect”?