The percentage of Arizona residents ages 25-64 who have completed a two- or four-year degree or who hold an active professional certificate or license.
Important note about 2020 data
Attainment data are derived from the American Community Survey (ACS). The 2020 ACS was affected by the COVID-19 pandemic and other conditions that prevented the normal collection and interpretation of Census data. Abnormal variation in the data could not be satisfactorily corrected with experimental weightings. The Education Progress Meter will continue to use the 2019 data until these data issues have been resolved. We will continue to monitor the situation and remain committed to using the most accurate and reliable data available.
For more information, read this post by the U.S. Census Bureau.
Arizona residents age 25-64 who have two-year, four-year, or advanced degrees from public or private institutions or who have an active professional certificate or license.
Alternatively, for county data and demographic subgroups, this measure includes only those with Associate’s degrees or higher, because data on professional certificates and licenses are not available at that level due to the limited sample size.
Those who have never had post-high-school education, or who have attended but earned neither a degree nor a non-degree certificate, are not included. Also excluded are people under age 25, many of whom are still working on their education, and those age 65 and over, many of whom are retired. Those living in group quarters are excluded from the poverty breakdowns because their income is not counted in the poverty statistic.
The Attainment goal contains two data elements: one for adults with at least an Associate’s degree, and another for adults who hold an active professional certificate or license.
For the first element, PUMS data for 2019 were filtered to include only persons aged 25 to 64. The educational attainment variable was collapsed from 24 categories down to two: those with at least an Associate’s degree and those without. The percentage with at least an Associate’s degree was then calculated for each race/ethnicity category and by English proficiency, poverty, and disability status. Results with excessive margins of error were removed from the final table.
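The filter-and-collapse step above can be sketched in pandas as follows. The variable names AGEP (age), SCHL (24-category attainment code, where codes 20-24 are Associate’s degree and above), and PWGTP (person weight) come from the ACS PUMS data dictionary; the tiny inline frame is synthetic and stands in for the real 2019 person file.

```python
import pandas as pd

# Synthetic stand-in for the 2019 1-year PUMS person file for Arizona.
pums = pd.DataFrame({
    "AGEP":  [30, 45, 70, 25, 60],
    "SCHL":  [21, 16, 22, 20, 18],   # e.g. 21 = Bachelor's, 20 = Associate's
    "PWGTP": [10, 12,  8, 15,  9],
})

# Keep only persons aged 25 to 64.
adults = pums[pums["AGEP"].between(25, 64)].copy()

# Collapse the 24-category attainment variable to two categories:
# codes 20 and above are Associate's degree or higher.
adults["has_degree"] = adults["SCHL"] >= 20

# Weighted percentage with at least an Associate's degree.
pct = 100 * (adults["PWGTP"] * adults["has_degree"]).sum() / adults["PWGTP"].sum()
```

In the real workflow the same weighted percentage would be computed per subgroup (race/ethnicity, LEP, poverty, disability) with a `groupby` before the margin-of-error screen is applied.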
There is no generally accepted local, state, or national data source that counts adults with non-academic professional credentials. Data from the Current Population Survey (CPS) were therefore used to estimate certifications and licenses, averaging the twelve monthly CPS files for 2019. After selecting the Arizona population age 25 to 64, those already possessing academic degrees (Associate’s degree and above) were removed from the data. The resulting count of certificate and license holders was then added to the academic credential number obtained from the PUMS data.
The Current Population Survey (CPS) is a survey of about 60,000 households conducted monthly by the Census Bureau on behalf of the Bureau of Labor Statistics. Although this survey is primarily designed to track employment trends, it also collects data on demographics, educational attainment, and more.
Data from the CPS were retrieved from https://cps.ipums.org/cps/, a service that aggregates and formats CPS data. Variables downloaded included age, race, Hispanic status, educational attainment, the presence of professional certification, and county of residence. Because the CPS uses a much smaller sample than the American Community Survey that supplies the PUMS data, only a few counties in Arizona are identified in the data.
As with the PUMS data, the CPS data were filtered to leave only respondents age 25-64. Non-academic attainment was determined by identifying those who possess an “active professional certification or license” but who do not have an academic degree. This avoids double-counting respondents, such as doctors, who hold both an academic degree and a professional license.
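A minimal sketch of that CPS step, assuming generic column names (age, has_degree, has_cert_or_license, weight) in place of the actual IPUMS-CPS variables, and a single synthetic monthly frame in place of the twelve real ones:

```python
import pandas as pd

# Synthetic stand-in for the monthly CPS extracts; in practice there
# would be one frame per month of 2019.
monthly_frames = [
    pd.DataFrame({
        "age":                 [30, 50, 40],
        "has_degree":          [True, False, False],
        "has_cert_or_license": [True, True, False],
        "weight":              [100, 120, 90],
    }),
]

rates = []
for cps in monthly_frames:
    adults = cps[cps["age"].between(25, 64)]
    # Non-academic attainment: certified or licensed, but no academic
    # degree -- this is what prevents double counting.
    nonacad = adults["has_cert_or_license"] & ~adults["has_degree"]
    rates.append((adults["weight"] * nonacad).sum() / adults["weight"].sum())

# Average the monthly estimates across the year.
nonacademic_share = sum(rates) / len(rates)
```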
Some thought was put into the validity of combining the PUMS and CPS datasets to produce a single number. It is possible to estimate both academic and non-academic attainment using the CPS data alone. However, because the ACS sample behind the PUMS data is much larger, the PUMS-based estimate of academic attainment is more accurate, which makes PUMS the preferable source for the academic portion of the final number.
A remaining concern was that the academic attainment numbers derived from the PUMS data and the CPS data do not quite match, which suggested there might be problems in combining the two sources. To resolve this conflict, estimated standard errors were calculated for the two academic attainment figures. These showed that there is no statistically significant difference between the two estimates. Since the Census estimate of academic attainment is more accurate due to its larger sample, it is the most appropriate source for that portion of the overall attainment number. The non-academic portion, derived from the CPS, is less precise but is still the best currently available estimate.
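The significance check can be illustrated as follows. The estimates and standard errors below are hypothetical placeholders, not the actual PUMS and CPS figures; the 1.645 critical value corresponds to the 90% confidence level used throughout this methodology.

```python
import math

# Hypothetical academic-attainment estimates (percent) and their
# standard errors from the two sources.
pums_pct, pums_se = 36.0, 0.5
cps_pct,  cps_se  = 35.2, 0.9

# Standard error of the difference between two independent estimates.
se_diff = math.sqrt(pums_se**2 + cps_se**2)

# The gap is statistically significant at the 90% level only if it
# exceeds 1.645 times the standard error of the difference.
z = abs(pums_pct - cps_pct) / se_diff
significant = z > 1.645
```

With these illustrative inputs the gap is well inside the margin of error, mirroring the conclusion reached above.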
Two of the indicators, attainment and opportunity youth, are drawn from Public-Use Microdata Sample (PUMS) data from the United States Census Bureau. PUMS data is a product of the Bureau’s American Community Survey (ACS), which is conducted annually and collects a wide variety of data from households across the nation.
PUMS data is available in samples collected over a five-year period or over a single year. The five-year sample is more accurate, but because it averages responses across years it smooths out exactly the year-to-year changes the Progress Meter is looking for, so the one-year sample is more appropriate for this use.
The one-year sample for 2019 for Arizona was downloaded from the Census website (http://www.census.gov/programs-surveys/acs/data/pums.html). The data was then imported into SPSS, a statistical software package. Using SPSS, summary variables were created for race and ethnicity, age categories, limited English proficiency (LEP), poverty status, work status, school attendance, educational attainment, disability status, and county of residence. Note that several counties with smaller populations are combined in the PUMS data to protect the privacy of survey respondents. The PUMS data is so detailed that it would be possible to identify individual people or families if the data were focused on a smaller geography. Populous counties are big enough that individual records are effectively masked, but data from smaller counties such as Mohave and La Paz are combined to create a larger population pool and protect identities.
An automated script file was then developed to produce the tables used by the Progress Meter. The tables contain the variable of interest broken down by the ten county-comparable geographies reported in PUMS, as well as by race and ethnicity, limited English proficiency (LEP), poverty status, and disability status.
These tables were then transferred to Microsoft Excel for further formatting, calculation of percentages, analysis of standard errors, and computation of 90% confidence intervals. Standard errors for the estimates and the derived proportions were calculated according to the formulas suggested by the Census Bureau (http://www2.census.gov/programs-surveys/acs/tech_docs/pums/accuracy/2016AccuracyPUMS.pdf). These calculations consider the size of the estimates, the size of the population from which the estimates were drawn, and the design factors published by the Census Bureau.
Values in the final Excel output tables were suppressed in cases where the 90 percent confidence interval exceeded +/- 25 percentage points or encompassed either 0% or 100%.
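As a simplified illustration of the interval-and-suppression logic, the sketch below uses the basic simple-random-sample formula inflated by an assumed design factor, rather than the exact generalized-variance formulas in the linked accuracy document; the inputs are hypothetical.

```python
import math

def ci_and_suppress(p, n, design_factor=1.3):
    """p: estimated proportion (0-1); n: unweighted sample size.

    Illustrative only: a design-factor-inflated SRS standard error
    stands in for the Census Bureau's published PUMS formulas.
    """
    se = design_factor * math.sqrt(p * (1 - p) / n)
    margin = 1.645 * se                     # half-width of the 90% CI
    low, high = p - margin, p + margin
    # Suppress when the CI is wider than +/-25 points or reaches 0%/100%.
    suppress = margin > 0.25 or low <= 0 or high >= 1
    return low, high, suppress

low, high, suppress = ci_and_suppress(0.45, 800)
```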
Attainment data are products of the American Community Survey conducted by the Census Bureau. Since these data are drawn by sampling a small percentage of the overall population, there is a degree of uncertainty in the numbers.
Rather than seeing these numbers as point descriptors of exactly the percent of adults with college degrees, for example, it is more accurate to visualize them as the center of a 90% confidence interval. Were it possible to interview everyone in Arizona, there is a 90% chance that the ‘true’ percentage would fall within this confidence interval.
This uncertainty is known as sampling error, an unavoidable consequence of the survey process. The width of the confidence interval is determined by the standard error of the estimate, which is used to monitor the quality of the estimate.
Inevitably, other errors creep into the data. Random errors, such as a respondent accidentally checking the wrong box on a survey form, do not bias the data in one direction or another, but do affect the precision of the estimate by increasing the standard error.
Systematic errors, which push the data in a specific direction (perhaps through a poorly worded question), can be a serious concern. Both the Census Bureau and the Bureau of Labor Statistics conduct rigorous, high-quality surveys that reduce systematic errors to a minimum.