Surveys are one of the most common ways organizations — employers, researchers, government agencies, and nonprofits — collect information about disability status. But asking about disability the right way matters more than most survey designers realize. Poorly worded questions produce bad data, create legal exposure, and can make respondents feel singled out or reduced to a label. This guide explains how disability questions work in survey design, what frameworks exist, and why the "right" approach depends heavily on who's asking, why they're asking, and what they plan to do with the answers.
Disability isn't a single, fixed category. It includes physical conditions, mental health diagnoses, cognitive differences, chronic illness, sensory impairments, and more. A person might identify strongly as disabled, meet a legal definition of disability, or qualify for Social Security Disability Insurance (SSDI) — and those three things don't always overlap.
This creates an immediate problem for survey designers: how you define disability in the question shapes who gets counted. Ask about "limitations in daily activities" and you'll capture a different population than if you ask about "conditions that have lasted or are expected to last 12 months or more." Both are valid framings, but they measure different things.
For most general-purpose surveys — including federal data collection — the most widely adopted framework is the Washington Group Short Set on Functioning (WG-SS). Developed by the Washington Group on Disability Statistics, a city group established under the United Nations Statistical Commission, this set of six questions focuses on functional difficulty rather than diagnosis or label.
The six domains it covers:
| Domain | Sample Question Framing |
|---|---|
| Vision | Difficulty seeing, even with glasses |
| Hearing | Difficulty hearing, even with hearing aids |
| Mobility | Difficulty walking or climbing stairs |
| Cognition | Difficulty remembering or concentrating |
| Self-care | Difficulty with self-care (washing, dressing) |
| Communication | Difficulty communicating or being understood |
Respondents typically rate difficulty on a scale: no difficulty, some difficulty, a lot of difficulty, or cannot do it at all. This approach avoids requiring a formal diagnosis, respects self-identification, and produces data that's comparable across populations and geographies.
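To make the scale concrete, here is a minimal Python sketch of the Washington Group's commonly used analytic cutoff, which counts a respondent as having a disability if any domain is rated "a lot of difficulty" or "cannot do at all." The numeric coding and function names are illustrative, not part of the official standard.

```python
# Sketch of WG-SS scoring under the commonly used analytic cutoff:
# a respondent is counted as having a disability if ANY of the six
# domains is rated "a lot of difficulty" (3) or "cannot do at all" (4).
# Coding and names here are illustrative.

DOMAINS = ["vision", "hearing", "mobility",
           "cognition", "self_care", "communication"]

# Standard four-point difficulty scale, coded 1-4.
SCALE = {
    1: "no difficulty",
    2: "some difficulty",
    3: "a lot of difficulty",
    4: "cannot do at all",
}

def meets_wg_ss_cutoff(responses: dict[str, int]) -> bool:
    """True if any domain is rated 3 ('a lot of difficulty') or higher.
    Missing domains are treated as 1 ('no difficulty')."""
    return any(responses.get(domain, 1) >= 3 for domain in DOMAINS)

# Example: a lot of difficulty walking, mild hearing difficulty.
respondent = {"mobility": 3, "vision": 1, "hearing": 2}
print(meets_wg_ss_cutoff(respondent))  # True
```

Graded coding like this is what makes WG-SS data comparable across surveys: analysts can apply the same cutoff (or a stricter or looser one) to any dataset that preserves the four-point scale.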
If a survey is connected to disability accommodations, benefits eligibility, or compliance with laws like the Americans with Disabilities Act (ADA) or Section 503 of the Rehabilitation Act, the framing requirements shift significantly.
For workplace surveys tied to federal contractor compliance, the OFCCP's voluntary self-identification form uses specific language defining disability as a physical or mental impairment that substantially limits a major life activity — language drawn directly from the ADA. Using that exact language isn't optional; it's required.
For SSDI specifically, the SSA defines disability as the inability to engage in substantial gainful activity (SGA) due to a medically determinable impairment expected to last at least 12 months or result in death. That definition is narrower and more specific than everyday use of the word "disabled." A survey asking whether someone receives SSDI is asking a factual, administrative question — not a subjective one about functional ability.
Even well-intentioned surveys routinely get this wrong:
**Conflating disability with illness.** Asking "Do you have any serious illnesses or disabilities?" bundles two distinct concepts and produces unreliable data.

**Binary yes/no framing.** Disability exists on a spectrum and changes over time. A yes/no question forces respondents into categories that may not reflect their experience, and it undercounts people with episodic or fluctuating conditions like MS, lupus, or mental health disorders.

**Leading with diagnosis lists.** Listing specific conditions ("Do you have diabetes, cancer, PTSD, or another disability?") introduces bias and may deter disclosure. It also implies that only listed conditions count.

**Skipping a "prefer not to answer" option.** Disability status is sensitive personal information. Omitting a decline option reduces response quality and can increase abandonment.

**Forgetting the purpose of the question.** Every disability question in a survey should be tied to a clear reason. Researchers need to know: Are we measuring prevalence? Identifying accommodation needs? Tracking representation for compliance? The purpose determines the right question.
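The fixes to these pitfalls can be sketched as a question specification: functional framing instead of a diagnosis list, a graded scale instead of yes/no, an explicit decline option, and a documented purpose. Everything here is a hypothetical illustration, not any survey platform's actual API.

```python
# Hypothetical sketch of a survey item spec that avoids the pitfalls
# above: graded scale (not yes/no), explicit decline option, and a
# recorded purpose for the question. Names are illustrative.

from dataclasses import dataclass

@dataclass
class SurveyItem:
    text: str
    options: list[str]
    allow_decline: bool = True   # always offer a way to decline
    purpose: str = ""            # why this question is being asked

    def render_options(self) -> list[str]:
        """Return the answer choices, appending a decline option."""
        opts = list(self.options)
        if self.allow_decline:
            opts.append("Prefer not to answer")
        return opts

mobility_item = SurveyItem(
    text="Do you have difficulty walking or climbing stairs?",
    options=["No difficulty", "Some difficulty",
             "A lot of difficulty", "Cannot do at all"],
    purpose="Estimate prevalence of mobility limitations",
)

print(mobility_item.render_options())
```

Forcing every item to carry a `purpose` field is one lightweight way to enforce the last point: a question with no stated purpose shouldn't be in the instrument.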
A survey question about disability captures self-identification — what a person chooses to disclose about themselves at a given moment. This is not the same as a formal disability determination.
A formal SSDI determination, for example, involves a review of medical records, work history, Residual Functional Capacity (RFC) assessments, Disability Determination Services (DDS) evaluation, and often multiple stages of review. A survey checkbox captures none of that complexity.
This distinction matters for anyone designing a survey that touches benefits, accommodations, or legal compliance: survey responses cannot substitute for formal eligibility determinations, and they shouldn't be treated as if they can.
The "right" disability question depends on:
A researcher at a university, an HR department at a federal contractor, a nonprofit assessing service gaps, and an SSA intake screener are all asking about disability — but they need different questions, different response scales, and different disclosures about how answers will be used.
What works well in one context produces misleading or legally problematic results in another. That gap between general best practice and your specific use case is exactly where thoughtful survey design — and sometimes legal or methodological consultation — becomes necessary.
