Query ID: acsBasetbls Coded: October, 2010

Author: John Blodgett, OSEDA/MCDC

Summary: In this example we demonstrate how to access the detailed Base tables data (these are the tables based upon the "summary files") from the American Community Survey. The example is based upon a real-life application that we did in 2010 for a public agency in Missouri, with only very slight variations (to keep things relatively simple). Here is the problem statement/user request: We are interested in knowing approximately how many children in the state of Missouri are eligible for the WIC (Women, Infants and Children) program. To be eligible the child must be 5 years old or younger and have a poverty ratio of less than 1.85 (the same income criteria as used for the Free and Reduced Lunch program). We would love to have these data by county, but we understand that if we want more current data from the ACS we cannot get it for the many smaller counties in the state. We are therefore willing to take the data summarized at the state and PUMA levels. We understand that there is very current ACS data at the PUMA level released each year, and we are familiar with the strong relationship between counties and PUMAs in the state (see report link in Inputs: section, just below).
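The eligibility rule in the request is just two conditions that must both hold. As a minimal, language-neutral sketch (in Python; the function name is ours, not part of the WIC program or the actual application):

```python
def wic_income_eligible(age_years, poverty_ratio):
    """Return True if a child meets the criteria in the request:
    5 years old or younger, with a ratio of family income to the
    poverty threshold below 1.85."""
    return age_years <= 5 and poverty_ratio < 1.85
```

Note that the income cutoff is strict: a child at exactly 1.85 times the poverty threshold does not qualify.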

Data Set Accessed: acs2009.basetbls.ustabs17_20

Inputs: Dexter Query Form (live and modifiable!) | Postprocessing SAS program | MCDC report relating PUMAs and counties (for the entire US - use index to access Missouri part)

Outputs: Dexter Summary log | csv (comma-delimited) | html report | SAS data set | Postprocessing report (using SAS)

What You Need to Know (Broad, Common Knowledge)

You need to know the basics about the American Community Survey, which is no trivial thing for a total newbie. You also need to have some idea of what poverty ratios are about. As a practical matter, anyone who would propose the problem would need to have such familiarity. But others might want to do some reading in the Bureau's 2009 ACS Subject Definitions document starting on page 100 (related to Poverty Status).

A Brief Summary of What We Are Going to Do

Unlike earlier xsample modules in the 2010 series we are not going to take you through the detailed steps that we followed in order to extract the requested information using the Uexplore and Dexter access tools. At least not in the same way. We are replacing the html text with video modules that we hope will deliver the same information in a somewhat easier-to-digest format. Here is a bare-bones outline of what you can expect to see in the videos:

  1. We shall start at the Uexplore/Dexter (Data Archive) home page and from there navigate our way quickly to the acs2009 filetype data directory.

  2. We'll proceed to the basetbls subdirectory since that is where the detailed, or "base", tables are stored. We'll look at a series of metadata files in that directory to help us determine where to find the data we are looking for.

  3. We'll select the data set containing the poverty-related tables for access via Dexter.

  4. Within Dexter we'll opt to keep all 3 types of output. We'll code a filter indicating state of Missouri with type of geography being either state or PUMA.

  5. We'll choose our ID variables and the relevant table, identified as the one with the data we need (B17024). But we'll also type in a list of variables to keep, which overrides what we have selected from the Part III select lists.

  6. We'll run the extraction and review the various outputs.

  7. We'll illustrate how to tweak the query form in order to generate a parallel extract containing margin-of-error data.

  8. We'll proceed to a postprocessing phase where we access the extracted SAS data sets with a SAS program.

  9. Yes, we meant to say "SAS data sets" (plural). Our postprocessing will merge data from the two extracts (estimates and MOEs) and will illustrate calculation of a sum and a ratio, and of MOEs for each.

  10. The video presentation will be split into 3 parts. Part 1 is about navigating to the data set and understanding how things are organized and documented. Part 2 is the Dexter extraction phase (estimates data). Part 3 begins with a quick demonstration of extracting margin-of-error data from a parallel data set, followed by an overview of the postprocessing phase where we code a SAS(r) program that we used to generate the final results (html report) as requested by the client.
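The MOE arithmetic in step 9 follows the Census Bureau's standard approximation formulas for derived ACS estimates: the MOE of a sum is the square root of the sum of the squared MOEs, and the MOE of a proportion subtracts the squared, proportion-scaled denominator MOE under the radical (falling back to the additive ratio formula when that quantity goes negative, as the Bureau recommends). Here is a sketch of those two calculations in Python rather than the SAS used in the actual postprocessing program; the function names are ours:

```python
import math

def moe_of_sum(moes):
    """Approximate MOE of a sum of ACS estimates:
    the square root of the sum of the squared MOEs."""
    return math.sqrt(sum(m * m for m in moes))

def moe_of_proportion(num, moe_num, den, moe_den):
    """Approximate MOE of a proportion p = num/den, where the
    numerator is a subset of the denominator.  If the quantity
    under the radical is negative, use the ratio formula
    (plus sign) instead."""
    p = num / den
    radicand = moe_num**2 - (p**2) * (moe_den**2)
    if radicand < 0:
        radicand = moe_num**2 + (p**2) * (moe_den**2)
    return math.sqrt(radicand) / den
```

For example, summing two estimates with MOEs of 3 and 4 yields a combined MOE of 5; the same pattern extends to however many age/poverty-ratio cells of B17024 get added together before the eligible-children ratio is formed.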

Access the video modules

(All videos will open in a new window.)

Questions, Exercises

  1. Do the "parallel extraction" on the ustabs17_20_moes data set to produce the SAS data set referenced in the postprocess.sas program.

  2. Modify the setup (start with the saved query form in the upper-right window) to do a comparable report for your state, using persons over the age of 65.

  3. Modify the extract to obtain data for state totals and counties in California.

  4. Modify the query to extract only a csv file for Illinois cities, with data for all of tables B17002 and B17003.