LSAY QuickStats


LSAY QuickStats provides quick and simple access to data about young people from the Longitudinal Surveys of Australian Youth (LSAY), and replaces the previous cohort reports. Data are presented as a series of tables and charts and include information on education and employment pathways, as well as social indicators on living arrangements and satisfaction with life.

Data are organised by wave/year, beginning with the first wave of data collection (e.g. 1/2006) through to the final or most recent wave (e.g. 10/2015). For those interested in particular groups of young people, data can be filtered by a range of demographic variables.

To get the most out of LSAY QuickStats, please refer to Using LSAY QuickStats. Variable definitions can be found in the Glossary.

For information on how to access the confidentialised unit record data see How to access LSAY data.

If you have any feedback, suggestions or requests, please email

The LSAY cohorts

LSAY respondents come from a nationally representative sample of young people. Survey participants (collectively known as a ‘cohort’) enter the study at age 15 (or, as was the case in the earlier cohorts, when they were in Year 9). Individuals are contacted once a year until they are 25 years old.

Studies began in 1995 (Y95 cohort), 1998 (Y98 cohort), 2003 (Y03 cohort), 2006 (Y06 cohort), 2009 (Y09 cohort) and more recently in 2015 (Y15 cohort). More than 10,000 students start out in each cohort.

Note: Upon entering LSAY QuickStats for the first time, users may be prompted to log in by their browser or by SAS Visual Analytics. In such instances, selecting 'Cancel' or 'Log in as Guest' will allow access to the product.

Previous summary reports describing the activity of each cohort up to the 2005 waves can be accessed by going to cohort summaries.


LSAY QuickStats replaces the previous cohort reports and largely presents the same information, with a few exceptions.

What's new?
LSAY QuickStats has a number of features:

  • charts graphically represent change over time (mouse over for detailed information)
  • data can be filtered on up to 11 variables simultaneously
  • tables and charts update automatically based on user selections
  • all data for a given cohort are conveniently contained in a single report
  • the weighted number of respondents is provided for each estimate
  • socioeconomic status is now available.

What's changed?
Compared with the previous cohort reports, LSAY QuickStats:

  • uses sample sizes, rather than relative standard errors, to report on the reliability of the estimates
  • combines some categories for variables with large numbers of categories, in order to simplify the data
  • no longer allows the data to be exported to Excel.

It is anticipated that the ability to export data from LSAY QuickStats to Excel or other applications will be made available in a future release. For assistance with printing, please see Using LSAY QuickStats.


Sample selection

Each LSAY cohort consists of a sample of young Australians and follows them annually from when they are 15 years old (or in Year 9, as was the case for the Y95 and Y98 cohorts) for 10 years. Data for the first wave are collected through a combination of school achievement tests and a questionnaire administered at school. Data for subsequent waves are gathered through annual telephone or online interviews. Since 2003, the initial survey wave has been integrated with the OECD triennial Programme for International Student Assessment (PISA). Over 10,000 students start out in each cohort. Respondents can miss more than one non-consecutive interview and still remain in the survey.

Due to population shifts over time and survey attrition, care needs to be taken when comparing individual cohorts against other samples which have been drawn from different populations. For example, it can be misleading to compare the Y03 cohort at wave 3 in 2005 (who are, on average, 17 years old) against 17-year-olds from other surveys in the same year.

Information about the LSAY sample design can be found in the user guide series.

LSAY weightings

Survey responses are weighted to population benchmarks to account for the survey being undertaken as a sample rather than the entire target population.

Two weighting procedures are applied to the LSAY data presented: sample weights and attrition weights.

Sample weights

Sample weights reflect the original sample design and ensure that the sample matches the population from which it was drawn.

For each cohort, post-stratification weights are applied to adjust for sample selection procedures that allowed for oversampling in smaller states and territories. For example, students from states and territories with smaller numbers of Year 9 students are over-sampled and students from states with larger numbers of Year 9 students are under-sampled. In order for the sample to more accurately represent the population of Australian Year 9 students, the sample is weighted so that sample sizes within strata are proportionate to the population sizes of the strata.

In the case of the Y95 and Y98 cohorts, the sample weights sum to the sample size for each wave. From Y03 onwards, the weighting strategy was improved and the weights sum to the original population from which the sample was drawn in wave 1.
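The post-stratification logic described above can be sketched as follows. This is illustrative only: the state and territory names are real, but every count below is invented and is not an actual LSAY figure.

```python
# Hypothetical post-stratification sketch: within each stratum (state or
# territory), the weight is the ratio of the population count to the
# sample count. Over-sampled small jurisdictions therefore receive
# smaller weights than under-sampled large ones. All numbers invented.

population = {"NSW": 85000, "VIC": 62000, "TAS": 6500, "NT": 3000}  # Year 9 students
sample     = {"NSW": 3200,  "VIC": 2600,  "TAS": 900,  "NT": 700}   # respondents

weights = {s: population[s] / sample[s] for s in population}

# Over-sampled TAS gets a smaller weight than under-sampled NSW:
print(weights["TAS"] < weights["NSW"])   # True

# As for Y03 onwards, the weighted sample sums to the wave-1 population:
weighted_total = sum(weights[s] * sample[s] for s in sample)
print(round(weighted_total))             # 156500
```

For the Y95 and Y98 cohorts the weights would instead be scaled so that they sum to the sample size for each wave rather than to the population.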

Attrition weights

Attrition weights account for non-random respondent attrition. As expected, sample attrition does not occur uniformly across social groups. Attrition weights account for different groups of people dropping out of the survey at different rates.

In the case of the Y95 and Y98 cohorts, LSAY attrition weights are based on overall achievement quartiles and gender, and reweight to wave one.

From Y03 onwards, instead of relying on overall achievement quartiles and gender as the main predictors of attrition, a non-response analysis was undertaken to determine the factors that contributed to attrition. These factors were then used to calculate attrition weights for both the attrition from PISA to LSAY, and wave-on-wave attrition.
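The idea behind attrition weighting can be sketched in a much-simplified form: respondents are up-weighted by the inverse of their group's response rate, so groups that drop out faster regain their wave-1 share. The groups and rates below are hypothetical; the actual LSAY weights are derived from a non-response model using the factors identified in the analysis, not a single rate per group.

```python
# Simplified attrition-weighting sketch with made-up response rates.
# Each remaining respondent is weighted by the inverse of their group's
# wave-on-wave response rate, restoring the group's wave-1 share.

response_rate = {"group_A": 0.90, "group_B": 0.60}  # hypothetical groups

def attrition_weight(group: str) -> float:
    return 1.0 / response_rate[group]

# The group that drops out faster gets the larger weight:
print(attrition_weight("group_A") < attrition_weight("group_B"))  # True
print(round(attrition_weight("group_B"), 2))                      # 1.67
```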

Final weights

The final LSAY weights for each wave combine the sampling and attrition weights. Despite attempts to counteract attrition bias, users should be aware that survey drop-out may not be fully accounted for in the attrition weights for all sub-populations. To allow users to determine the effectiveness of the attrition weights, data in the demographic tables are presented both weighted and unweighted.

More information about the weighting procedures used can be found in the LSAY user guide series.

Reliability estimates

The standard error is widely used when considering the reliability of an estimate. The greatest contributor to standard error is sample size: small sample sizes result in high standard errors and wide confidence intervals, thereby reducing the reliability of an estimate. To account for the reliability of the data, estimates obtained from sample sizes of fewer than 20 respondents are highlighted in red text, and their weighted sample size is suppressed. This is particularly important when applying filters to the tables.
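The relationship between sample size and standard error can be illustrated with the usual formula for the standard error of a proportion from a simple random sample. Only the 20-respondent threshold comes from the text above; the formula, function names and example numbers are illustrative.

```python
import math

# SE of an estimated proportion p from n respondents: sqrt(p(1-p)/n).
# The SE shrinks as n grows, which is why estimates based on fewer than
# 20 respondents are flagged in red and have their weighted size suppressed.

def se_proportion(p: float, n: int) -> float:
    return math.sqrt(p * (1 - p) / n)

def is_unreliable(n: int) -> bool:
    return n < 20  # QuickStats threshold for flagging an estimate

print(round(se_proportion(0.5, 15), 3))    # 0.129  (small n, large SE)
print(round(se_proportion(0.5, 1500), 3))  # 0.013  (large n, small SE)
print(is_unreliable(15))                   # True
```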

Rounding policy

NCVER's policy is that all figures contained in tables, including table totals, are calculated from the underlying (unrounded) data. Therefore, in some instances the sum of the rounded data may not equal the table totals.
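A small worked example of why rounded components need not sum to the rounded total: both the components and the total are rounded from the underlying data independently. The percentages below are invented.

```python
# Invented percentages before rounding: they sum to exactly 100.
underlying = [33.4, 33.4, 33.2]

rounded_parts = [round(x) for x in underlying]  # each rounds down to 33
rounded_total = round(sum(underlying))          # total rounds from 100.0

print(sum(rounded_parts))  # 99
print(rounded_total)       # 100 -- does not equal the sum of the parts
```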