
Typically on the first Friday of each month, the U.S. Bureau of Labor Statistics (BLS) releases its Employment Situation Summary, a report eagerly awaited and immediately dissected by financial markets, policymakers, and the broader public. This report contains the headline “jobs number,” a critical indicator that offers the earliest possible glimpse into the health of the American labor market. Yet, what many may not fully grasp is that these initially reported figures are not static; over the following weeks, months, and even years, these numbers undergo a series of planned updates.
While revisions sometimes sow confusion or even spark suspicion, hinting at perceived inaccuracies or manipulations, for those intimately familiar with the intricate process, these adjustments are far from mistakes. They are, in fact, an intentional and transparent cornerstone of a sophisticated statistical methodology. This system is meticulously designed to achieve a delicate balance: providing timely economic information while simultaneously striving for the greatest possible accuracy in its comprehensive assessment of the U.S. labor market.
Consider the initial release as merely the first draft of economic history—a rapid, yet necessarily incomplete, snapshot derived from preliminary data. Subsequent revisions then layer on detail and clarity, gradually building a progressively clearer and more precise picture of employment trends across the nation. This dynamic process, essential to the integrity of the data, is structured across four primary layers: monthly updates incorporating additional survey responses, comprehensive annual benchmark revisions, routine seasonal adjustments, and periodic updates to population estimates.

The foundation of the monthly jobs report rests upon the integration of data from two distinct, large-scale surveys, each boasting its own unique methodology, scope, and specific purpose. While their differing approaches can occasionally present seemingly conflicting short-term results, this dual-survey system actually provides a far more robust and nuanced understanding of the labor market than any single source could possibly offer. For instance, one survey might reveal robust growth in company payrolls, while the other simultaneously highlights a weakness in self-employment, thereby furnishing policymakers with a more complete and multifaceted view of prevailing economic currents.
The first of these critical instruments is the Current Employment Statistics (CES) survey, widely recognized as the “establishment” or “payroll” survey. It is the source of the number most frequently quoted in headlines—the change in nonfarm payroll employment. Conducted as a federal-state cooperative program, the CES is a monthly survey that rigorously gathers data from approximately 121,000 businesses and government agencies, collectively representing about 631,000 individual worksites across the nation. This extensive reach allows the survey to estimate the number of nonfarm jobs on business and government payrolls, while also collecting vital data on average weekly hours worked and average hourly earnings. Crucially, because it counts jobs at the “place of work,” it accurately reflects employment within a specific geographic area, irrespective of where individual workers reside. The CES counts any full-time or part-time employees who received pay for any part of the pay period that includes the 12th of the month, meaning that a person holding two jobs at two different companies would be counted twice. However, it specifically excludes self-employed individuals, unpaid family workers, agricultural workers, and private household employees such as nannies or housekeepers.

Complementing the CES is the Current Population Survey (CPS), often referred to as the “household” survey, which is the origin of the official unemployment rate. This monthly survey, conducted by the U.S. Census Bureau on behalf of the BLS, encompasses about 60,000 eligible U.S. households, carefully selected to represent the entire country. The CPS gathers data on the labor force status of the civilian noninstitutional population aged 16 and over, classifying individuals as employed, unemployed, or not in the labor force based on their activities during a specific reference week. This robust dataset is then used to calculate the unemployment rate, labor force participation rate, and employment-population ratio. Unlike the CES, the CPS counts people, not jobs; an individual with multiple jobs is counted only once as “employed.” Its scope is notably broader, encompassing groups excluded by the establishment survey, including self-employed individuals, agricultural workers, and unpaid family workers who contribute at least 15 hours to a family business. Furthermore, the survey provides invaluable demographic data, enabling detailed analysis of employment trends by age, sex, race, and ethnicity, though it does exclude individuals in the Armed Forces and those residing in institutions like prisons or nursing homes.
The most frequent and publicly visible revisions pertain to the monthly updates of the CES payroll numbers, adjustments that occur in the two months immediately following the initial release. These revisions are a direct consequence of the practical realities inherent in large-scale data collection. Policymakers, financial markets, and the general public invariably demand the earliest possible insights into the economy’s condition. To satisfy this urgent demand, the BLS issues its first estimate with remarkable speed; the Employment Situation report typically surfaces on the first Friday of each month, presenting data from just a few weeks prior.
However, this impressive speed comes with an unavoidable trade-off. At the precise moment of the first preliminary estimate, the BLS has not yet received survey responses from all businesses included in its vast sample. Historically, the average collection rate for the initial release hovers around 73 percent. To still produce a headline number that serves as a timely indicator, the BLS leverages the available data and employs a reasoned assumption: that employment trends among businesses yet to report are generally similar to those that have already submitted their data. This makes the first release a quick, yet admittedly lower-resolution, snapshot of the job market.
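The BLS's production estimator (a weighted link-relative method) is considerably more elaborate than this, but the core assumption about non-responders can be sketched in a few lines. Every figure below is hypothetical, chosen only to illustrate the mechanism:

```python
# A toy sketch of the core assumption (NOT the BLS's actual weighted
# link-relative estimator): project the growth rate observed among
# responders onto the whole sample. All figures are hypothetical.

responders_prior = 73_000   # jobs at responding firms last month (~73% of sample)
responders_now = 73_073     # jobs at those same firms this month
universe_prior = 100_000    # jobs across the entire sample last month

# Assume businesses that have not yet reported grew at the same rate
# as those that did.
growth = responders_now / responders_prior
first_estimate_change = round(universe_prior * (growth - 1))
print(first_estimate_change)  # 100
```

If the late responders turn out to have grown more slowly than this projection assumed, the subsequent estimates will come in lower, which is exactly the mechanism behind the monthly revisions.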
Over the subsequent two months, BLS staff diligently continue collecting responses from the remaining businesses within the sample. This influx of additional information is then meticulously incorporated into two subsequent releases. The Second Preliminary Estimate is issued one month after the initial report, integrating a larger pool of data. Following this, the Third and Final Sample-Based Estimate is released two months after the initial report, by which point the collection rate typically climbs to approximately 95 percent, providing a significantly more complete and, therefore, more accurate picture of employment trends. Monthly revisions arise when the employment data submitted by late-responding firms diverges from the data initially provided by early responders. Should businesses that submitted data later exhibit weaker job growth or a higher incidence of layoffs than those that reported early, the initial jobs number will be revised downward. Conversely, if late responders demonstrate stronger growth, the initial estimate will be revised upward. This systematic process is not only standard but also anticipated, engineered to progressively enhance data accuracy as more comprehensive information becomes available.

A substantial contemporary challenge confronting the BLS is the discernible decline in survey response rates from businesses over the past decade. This trend has direct ramifications for public perception of the jobs report. As fewer businesses manage to respond by the initial reporting deadline, the first estimate necessarily becomes more reliant on sophisticated statistical modeling and careful assumptions about non-responders. Consequently, this increases the likelihood of larger discrepancies between the initial estimate and the more complete data that arrives later. While such larger revisions are statistically explicable and part of a robust methodology, they can unfortunately be more easily misconstrued, or even politically framed, as “errors” or “incompetence.”
A particularly clear illustration of this phenomenon occurred with the data for May and June 2025, which experienced unusually large downward revisions. As detailed in the July 2025 Employment Situation Summary, the initial estimate for May 2025, which reported 144,000 jobs gained, was subsequently revised down by a substantial 125,000 jobs, culminating in a final sample-based estimate of just 19,000 jobs. Similarly, the initial estimate for June 2025, which indicated 147,000 jobs gained, was revised down by 133,000 jobs, settling at a final sample-based estimate of merely 14,000 jobs. Cumulatively, these revisions signified that employment growth in May and June was 258,000 jobs lower than initially reported, presenting a significantly weaker portrayal of the economy’s momentum than initial headlines had suggested. Historical data from 2012 further contextualizes this, showing an average collection rate for the first release at 73.1 percent, compared to 94.6 percent for the third estimate. While the average monthly employment change from the first estimate was +142,000 in 2012, it rose to +165,000 with the third estimate, demonstrating the typical upward revision that occurs, though significant downward revisions, like that for April 2012 (from 115,000 to 68,000), also underscore the dynamic nature of these numbers.
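The arithmetic behind these headline revisions is straightforward subtraction; a quick check of the May and June 2025 figures cited above:

```python
# (initial estimate, final sample-based estimate), thousands of jobs
estimates = {"May 2025": (144, 19), "June 2025": (147, 14)}

# Revision = later estimate minus initial estimate
revisions = {month: final - initial
             for month, (initial, final) in estimates.items()}
print(revisions)                 # {'May 2025': -125, 'June 2025': -133}
print(sum(revisions.values()))   # -258
```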

Beyond the monthly refinements based on a more complete sample, the most comprehensive correction to employment data originates from the annual “benchmark” revision. This process is truly transformative, as it moves entirely beyond survey data, aligning estimates with a near-complete count of American jobs. The bedrock of the benchmark revision is the Quarterly Census of Employment and Wages (QCEW) program. In stark contrast to the CES, the QCEW is not a sample survey; it is a comprehensive census derived from mandatory unemployment insurance tax records that virtually all employers are required to file with their state workforce agencies four times annually. Because it is founded on these compulsory tax filings, the QCEW encompasses approximately 97 percent of all nonfarm payroll jobs in the United States, earning it the status of the gold standard for employment counts. Its primary limitation, however, lies in its timeliness, as this exhaustive data only becomes available with a five- to six-month lag.
Once a year, the BLS undertakes the benchmark revision to rigorously re-anchor its monthly CES survey estimates to the unequivocally more accurate QCEW data. This process commences by establishing a benchmark point: the BLS utilizes comprehensive QCEW data to ascertain a highly accurate count of total nonfarm employment for March of the preceding year. Subsequently, the revision is calculated by comparing this March QCEW count to the sample-based CES estimate for the very same month; the difference between these two figures constitutes the total benchmark revision, serving as a critical measure of the overall accuracy of the monthly survey over the past year. Finally, the historical series is adjusted. The BLS operates on the assumption that this total error accumulated steadily over the prior year, employing a “linear wedge-back” procedure to smoothly distribute the revision across the 11 months leading up to the March benchmark. Data for the nine months following the March benchmark also undergo revision, using the newly established, more accurate March level as their foundational starting point. This annual recalibration is essential for correcting any sampling error or modeling discrepancies that may have gradually infiltrated the monthly estimates over time.
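A minimal sketch of the wedge-back step, assuming the simplest linear interpolation (the actual BLS procedure also handles industry detail and seasonal adjustment, omitted here); the employment series and function name are hypothetical:

```python
def wedge_back(levels, total_revision):
    """Distribute a benchmark revision linearly over the benchmark year.

    levels: monthly employment levels for April through March
            (12 values, thousands of jobs); the prior March benchmark
            is assumed correct, so April absorbs 1/12 of the revision
            and the benchmark March absorbs all of it.
    total_revision: March QCEW count minus the March CES estimate.
    """
    n = len(levels)
    return [level + total_revision * k / n
            for k, level in enumerate(levels, start=1)]

# Hypothetical series plus the -818 (thousand) preliminary revision
levels = [150_000 + 100 * k for k in range(12)]
revised = wedge_back(levels, -818)
print(revised[-1] - levels[-1])          # -818.0 (full revision at March)
print(round(revised[0] - levels[0], 1))  # -68.2  (one-twelfth at April)
```

The linearity is itself an assumption: the BLS has no month-by-month information on when the error accrued, so spreading it evenly is the neutral choice.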
A particularly crucial aspect of the monthly CES survey that the benchmark revision addresses is its accounting for the inherent dynamism of the U.S. economy. There exists an unavoidable lag between the moment a new business commences operations and when it officially appears on the lists from which the survey sample is drawn. Similarly, businesses that cease operations may not be immediately removed from the sample. To compensate for this, the BLS employs a sophisticated statistical model known as the “birth-death model,” which estimates the net effect of job creation stemming from new businesses and job loss resulting from closing businesses each month. This model is meticulously constructed based on historical data extracted from the QCEW. The annual benchmark process provides a vital and independent check on the performance of this model; by comparing the model’s estimates to actual job changes precisely captured in the comprehensive QCEW data, the BLS can accurately measure any existing error and recalibrate the model for the forthcoming year, ensuring its ongoing relevance and precision.
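The benchmark check on the birth-death model amounts to comparing its monthly net estimates against what the QCEW tax records later show actually happened; a toy illustration with entirely made-up figures:

```python
# Hypothetical monthly net birth-death figures, thousands of jobs
model_estimates = [30, 28, 25, 31]   # what the model added each month
qcew_actuals = [22, 20, 15, 18]      # what tax records later revealed

# Positive errors mean the model overestimated net job creation
errors = [m - a for m, a in zip(model_estimates, qcew_actuals)]
print(errors)       # [8, 8, 10, 13]
print(sum(errors))  # 39 -> model overestimated by 39k over these months
```

A persistent error in one direction, as in this made-up series, is the signal the BLS uses to recalibrate the model for the following year.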

The magnitude and direction of the annual benchmark revision can also serve as a powerful harbinger of major turning points within the economy, especially those that monthly survey models failed to capture in real time. Because the birth-death model relies heavily on historical trends to project future job gains from new firms, economic inflection points—such as the onset of a recession or the beginning of a robust recovery—represent a sharp departure from past patterns. During such pivotal periods, the model is most susceptible to inaccuracies. For instance, as an economy begins to contract, the model may inadvertently continue projecting job creation based on recent periods of growth, even as new firm creation slows and closures accelerate. This systemic overestimation of jobs in monthly reports is eventually revealed months later by the QCEW data, which is grounded in actual tax filings. The typical outcome is a significant, often negative, benchmark revision that retroactively corrects the historical record. A stark example of this was the revision for March 2009, during the depths of the Great Recession, which retroactively erased a staggering 902,000 jobs from initial estimates. Similarly, the large preliminary downward revision announced for March 2024, at -818,000 jobs, was consistent with a period of slowing employment growth that monthly models had not yet fully captured. This makes the benchmark revision an indispensable economic signal in its own right.
The benchmark revision process adheres to a predictable annual schedule. In late August or early September, the BLS issues a “preliminary benchmark revision” announcement, providing data users with an initial glimpse into the likely size and direction of the forthcoming revision for the previous March. The final, official benchmark revision is then seamlessly incorporated into the full historical dataset and released in early February of the subsequent year, concurrent with the publication of the January Employment Situation report. Observing how a single data point, such as the employment level for March 2024, evolves over time illustrates this transparency: initial release in early April 2024 might show +250,000 jobs; the second estimate in early May revises it to +230,000; the final sample-based estimate in early June adjusts it to +235,000. Then, the preliminary benchmark announcement on August 21, 2024, reveals a -818,000 revision for March 2024, culminating in February 2025 with a final downward revision of 598,000, aligning all data from April 2023 to December 2024 with the comprehensive QCEW counts.