I think the way to look at it is: as the collider runs over time, the accumulated cross-sectional area swept out by the particles increases. That value then needs to be corrected for each specific collider's inefficiency at detecting 100% of the particles (fudge factor = luminosity). I don't really know what I just said...

and maybe someone can offer a better explanation.
But, in the meantime, as quoted directly from Wikipedia:
Quote:
The "inverse femtobarn" (fb⁻¹) is a measurement of particle-collision events per femtobarn.
One inverse femtobarn is equal to around 70 million million (70 × 10¹²) collisions.
Over a period of time, two streams of particles with a cross-sectional area, measured in femtobarns, are directed to collide.
The total number of collisions is directly proportional to the luminosity
of the collisions measured over this time.
Therefore, the collision count can be calculated by multiplying the integrated luminosity
by the sum of the cross-sections for those collision processes.
The integrated luminosity is then expressed in inverse femtobarns for the time period (e.g., 100 fb⁻¹ in nine months).
Inverse femtobarns are often quoted as an indication of particle-collider effectiveness.
Fermilab produced about 10 fb⁻¹ in the last decade.
Fermilab's Tevatron took about four years to reach 1 fb⁻¹ in 2005,
while the Large Hadron Collider experiments ATLAS and CMS reached
over 5 inverse femtobarns of proton-proton data in 2011 alone.
Usage example:
As a simplified example, if a beamline runs for 8 hours (28,800 seconds)
at an instantaneous luminosity of 300 × 10³⁰ cm⁻²s⁻¹ = 300 μb⁻¹s⁻¹,
then it will gather data totaling an integrated luminosity of 8,640,000 μb⁻¹ = 8.64 pb⁻¹ during this period.
By next year, collisions will be occurring, if all continues to go well,
at a rate producing what physicists call one "inverse femtobarn,"
best described as a colossal amount of information for analysts to ponder.
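The arithmetic in the quoted usage example can be sanity-checked in a few lines of Python. This is just a sketch of the unit conversions, assuming a constant instantaneous luminosity over the whole run; it also shows that the quoted "70 million million collisions per inverse femtobarn" figure corresponds to an effective cross-section of roughly 70 millibarns.

```python
# Sanity-check of the luminosity arithmetic in the quote above.
# Assumes a constant instantaneous luminosity for the whole 8-hour run.

BARN_CM2 = 1e-24                   # 1 barn      = 1e-24 cm^2
MICROBARN_CM2 = 1e-6 * BARN_CM2    # 1 microbarn = 1e-30 cm^2
PICOBARN_CM2 = 1e-12 * BARN_CM2    # 1 picobarn  = 1e-36 cm^2
FEMTOBARN_CM2 = 1e-15 * BARN_CM2   # 1 femtobarn = 1e-39 cm^2

inst_lumi = 300e30                 # instantaneous luminosity, cm^-2 s^-1
run_time = 8 * 3600                # 8-hour run = 28,800 s

integrated = inst_lumi * run_time  # integrated luminosity, cm^-2

# Dividing out the area units gives the quoted figures:
print(integrated * MICROBARN_CM2)  # ~8,640,000 inverse microbarns
print(integrated * PICOBARN_CM2)   # ~8.64 inverse picobarns

# "70 million million collisions per inverse femtobarn" implies an
# effective cross-section of about 0.07 barn = 70 millibarns:
collisions_per_inv_fb = 70e12
sigma_barns = collisions_per_inv_fb * FEMTOBARN_CM2 / BARN_CM2
print(sigma_barns)
```

In other words, collision count = integrated luminosity × cross-section, which is exactly the multiplication the Wikipedia passage describes.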
Therefore, it naturally follows in today's news that
they will also need a much bigger computer.
