Data Carpentry Workshop

Are you looking for:

  • better ways to organize spreadsheet data
  • tools that speed up cleaning tabular (spreadsheet) data
  • a free alternative to commercial statistics software (R)
  • ways to create data visualizations in R
  • relational databases for managing data

If any of these are of interest to you, then a Data Carpentry workshop may be what you are looking for. Edward Davis (Geology) and the UO Libraries are hosting a remote broadcast of a session taking place at the University of California Museum of Paleontology this week.

What: Data Carpentry workshop
When: this week, March 3–4, from 8 am to 4 pm each day
Where: Knight Library (limited seating; registration required). This will be a remote broadcast of a session taking place at the University of California Museum of Paleontology. Assistants will be available to provide onsite support at the University of Oregon.


Data Carpentry workshops are for any researcher who has data they want to analyze, and no prior computational experience is required. This hands-on workshop teaches basic concepts, skills and tools for working more effectively with data. We will cover data organization in spreadsheets, data cleaning, SQL, and R for data analysis and visualization. Participants should bring their laptops and plan to participate actively. By the end of the workshop learners should be able to more effectively manage and analyze data and be able to apply the tools and approaches directly to their ongoing research.

More about Data Carpentry

In many domains of research the rapid generation of large amounts of data is fundamentally changing how research is done. The deluge of data presents great opportunities, but also many challenges in managing, analyzing and sharing data.

Data Carpentry is designed to teach basic concepts, skills and tools for working more effectively with data.  The workshop is aimed at researchers at all career stages and is designed for learners with little to no prior knowledge of programming, shell scripting, or command line tools.

More information on the workshop:

Local contact: Prof. Edward Davis

Posted in Best practices, Data cleanup, Data visualization, Research data management (RDM), Statistics, Tips & Tricks

Think Big: Transforming, Extending, Reusing Data

This is Love Your Data week, and each day we’ll be sharing a post about one or more fundamental data management practices that you can use. Part 5 of 5 (parts 1, 2, 3, 4)


While best practices for sharing your data are still evolving, there are some things to keep in mind when choosing to share your data:

  • When archiving your data choose an appropriate venue for your discipline. If you have any questions about choosing an appropriate data archive, contact your librarian.
  • Share ethically. Make certain that all sensitive information is redacted before submitting your data to an appropriate archive.
  • When sharing your data, include the metadata. Metadata, in part, documents your data. It tells others about your data: how it was created, who created it, and potentially, any stipulations for use of the data. For more information about metadata, consult the UO Libraries page on Metadata & Data Documentation.
  • Before depositing your data be aware of any associated intellectual property rights. While copyright is not applicable to most research data in the U.S., licensing can apply. Want to learn more? Check out this guide from the University of Minnesota Libraries for a more thorough explanation of intellectual property, licenses, and research data.

Need more information? Make sure to consult the UO Libraries RDM page on Sharing Data.


What will future generations do with your data? How will it change the world? Think about ways in which your data can be used by scholars, change-makers, and everyday citizens to make a difference in the world.


How do you share your data? How do you make it accessible and intelligible for future users? What are some of your concerns about sharing data? How can we make sharing data easier for data producers? And of course, what would make reusing data easier for all levels of consumers out there?

Twitter: #LYD16 Instagram: #LYD16 Facebook: #LYD16


For additional information, check out the resources board and the changing face of data on Pinterest, and consult the UO Libraries Research Data Management page on Sharing Data.

Source: materials adapted from the LYD website.


Posted in Data News

Respect your data – give & get credit

This is ‘Love Your Data‘ week, and each day we’ll be sharing a post about one or more fundamental data management practices that you can use. Part 4 of 5. Parts 1, 2, 3, 4, and 5

Data are becoming valued scholarly products instead of a byproduct of the research process. Federal funding agencies and publishers are encouraging, and sometimes requiring, researchers to share data that have been created with public funds. The benefit to researchers is that sharing your data can increase the impact of your work, lead to new collaborations or projects, enable verification of your published results, provide credit to you as the creator, and provide great resources for education and training. Data sharing also benefits the greater scientific community, funders, and the public by encouraging scientific inquiry and debate, increasing transparency, reducing the cost of duplicating data, and enabling informed public policy.

There are many ways to comply with these requirements – talk to your local librarian to figure out how, where, and when to share your data.


Do:

  • Share your data upon publication.
  • Share your data in an open, accessible, and machine-readable format (e.g., csv rather than xlsx, odf rather than docx)
  • Deposit your data in a subject repository or our institutional repository so your colleagues can find and use it.
  • Deposit your data in the UO repository (Scholars’ Bank) to enable long term preservation.
  • License your data so people know what they can do with it.
  • Tell people how to cite your data.
  • When choosing a repository, ask about the support for tracking its use. Do they provide a handle or DOI? Can you see how many views and downloads? Is it indexed by Google, Google Scholar, the Data Citation Index?


Avoid:

  • “Data available upon request” is NOT sharing the data.
  • Sharing data via PDF files.
  • Sharing raw data if the publication doesn’t provide sufficient detail to replicate your results.


Take the plunge and share some of your data today! Check out our information on data sharing, or the list of resources below, or contact us to get started.

If your data are not quite ready to go public, check out the repositories listed below under Resources, or this list of repositories, and see what kinds of data are already being shared.

If you have used someone else’s data, make sure you are giving them credit. Check out our information on how to cite data, or look at these resources.

Tell Us

How was the deposit process? Easier or harder than you expected?
What do you need to do before you can share your data?
What do you like or dislike about the repository?
Are people sharing data that is similar to yours?

Twitter: #LYD16
Instagram: #LYD16
Facebook: #LYD16


See the guidelines on the UO Research Data Management pages
Contact us if you have questions.
Check out the resource board & the changing face of data on Pinterest

Posted in Best practices, Data centers & repositories, Data citation, Permissions, Tips & Tricks

Help Your Future Self – Write it Down!

This is ‘Love Your Data‘ week, and each day we’ll be sharing a post about one or more fundamental data management practices that you can use. Part 3 of 5. Parts 1, 2, 3, 4, and 5



Think about your future self: Document, document, document! You probably won’t remember that weird thing that happened yesterday unless you write it down. Your documentation provides crucial context for your data. So whatever your preferred method of record keeping is, today is the day to make it a little bit better!


Data documentation, or metadata, is essential to sharing your data with other researchers or with your future self.

One form of data documentation is a readme file. Here are some basic best practices (courtesy of Cornell University) for readme files:

  • Create one readme file for each data file, whenever possible. It is also appropriate to describe a “dataset” that has multiple, related, identically formatted files, or files that are logically grouped together for use (e.g. a collection of Matlab scripts). When appropriate, also describe the file structure that holds the related data files (see Example 2 in this PDF).
  • Name the readme so that it is easily associated with the data file(s) it describes.
  • Write your readme document as a plain text file, avoiding proprietary formats such as MS Word whenever possible. Format the readme document so it is easy to understand (e.g. separate important pieces of information with blank lines, rather than having all the information in one long paragraph).
  • Format multiple readme files identically. Present the information in the same order, using the same terminology.
  • Use standardized date formats. Suggested format: W3C/ISO 8601 date standard, which specifies the international standard notation of YYYYMMDD or YYYYMMDDThhmmss.
  • Follow the conventions for your discipline for taxonomic, geospatial and geologic names and keywords. Whenever possible, use terms from standardized taxonomies and vocabularies.
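As a quick sketch of these practices, here is a small Python function (the field names and example files are invented for illustration, not part of Cornell's guidance) that builds a plain-text readme with an ISO 8601-style date:

```python
from datetime import date

def readme_skeleton(data_file, author, description):
    """Build a minimal plain-text readme for one data file.

    The field names here are illustrative, not a formal standard."""
    today = date.today().strftime("%Y%m%d")  # ISO 8601 basic date notation
    lines = [
        f"README for: {data_file}",  # ties the readme to the file it describes
        f"Created: {today}",
        f"Created by: {author}",
        "",                          # blank lines separate sections for readability
        "Description:",
        description,
    ]
    return "\n".join(lines)

# Name the readme so it is easily associated with the data file it describes:
with open("survey_2016_README.txt", "w") as f:
    f.write(readme_skeleton("survey_2016.csv", "J. Doe", "Survey responses, spring 2016."))
```

Because the output is plain text with one fact per line, multiple readme files generated this way stay identically formatted, per the Cornell tips above.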

Today’s Activity:

Using the guidelines and examples in Cornell’s pdf guide, write your own readme file and share it:

Twitter: #LYD16
Instagram: #LYD16
Facebook: #LYD16


Check out the resource board & the changing face of data on Pinterest
Talk to us if you have questions

Source: materials adapted from LYD website.

Posted in Data News

It’s the 21st Century — Do you know where your data is?

This is ‘Love Your Data‘ week, and each day we’ll be sharing a post about one or more fundamental data management practices that you can use. Part 2 of 5. Parts 1, 2, 3, 4, and 5


Have a plan for organizing your data. This usually includes a folder structure and file naming scheme (plan), and version control to keep track of file changes. Make these a part of your research process and they will become good habits. Check out the tips below!

You want to avoid this problem (see the PHD Comics strip "A Story Told in File Names").
Want to see more? Google "bad file names" and browse through the images for laughs.


File Naming: If you don’t already have a file naming plan and folder structure, come up with one and share it. See our list of good practices for naming files, also summarized here:

  • Be Clear, Concise, Consistent, and Correct
  • Make it meaningful (to you and anyone else who is working on the project)
  • Provide context, so the file name stays unique and recognizable if the file is moved to another location.
  • For sequential numbering, use leading zeros.
    • For example, a sequence of 1-10 should be numbered 01-10; a sequence of 1-100 should be numbered 001-100.
  • Do not use special characters: & , * % # ; ( ) ! @ $ ^ ~ ‘ { } [ ] ? < >
    • Some people like to use a dash ( – ) to separate words
    • Others like to separate words by capitalizing the first letter of each (e.g., DST_FileNamingScheme_20151216)
  • Dates should be formatted like this: YYYYMMDD (e.g., 20150209)
    • Put dates at the beginning or the end of your files, not in the middle, to make it easy to sort files by name
      • OK: DST_FileNamingScheme_20151216
      • OK: 20151216_DST_FileNamingScheme
      • AVOID: DST_20151216_FileNamingScheme
  • Use only one period, placed just before the file extension (e.g., name_paper.doc NOT name.paper.doc OR name_paper..doc)
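To make the naming tips concrete, here is a small, hypothetical Python checker; it encodes only the rules listed above, and the function name and messages are our own:

```python
import re

# The characters the tips above say to avoid in file names:
SPECIAL = set("&,*%#;()!@$^~'{}[]?<>")

def check_name(name):
    """Flag violations of the naming tips above. Heuristic, not exhaustive."""
    problems = []
    stem, dot, _ext = name.rpartition(".")
    if not dot or "." in stem:
        problems.append("more than one period (or no extension)")
    if any(c in SPECIAL for c in name):
        problems.append("special character in name")
    m = re.search(r"\d{8}", stem)  # a YYYYMMDD-style date
    if m and not (stem.startswith(m.group()) or stem.endswith(m.group())):
        problems.append("date buried in the middle of the name")
    return problems

# A name that follows the tips passes cleanly:
assert check_name("20151216_DST_FileNamingScheme.txt") == []
```

A checker like this could run over a whole project folder before you archive it, catching names that will sort or transfer badly later.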

File Version Control: Keeping track of versions of files, or file history, can be challenging but may save you a lot of time if you want to go back to an earlier version of a file. There are different ways to approach this issue:

  • Manually (low tech/no tech approach): Use a sequential numbered system: v01, v02
    • Don’t use confusing labels, such as ‘revision’, ‘final’, ‘final2’, etc.
  • Use version control software
    • If you use a cloud storage system, such as Spideroak, versioning might be built in and automatic
    • Git + GitHub may provide what you need but may also have a steep learning curve (there are lots of educational resources, such as this and this, and there are some GUI interfaces for Git if you’re not used to command-line work). There are other systems too, such as Mercurial or TortoiseSVN.
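As a sketch of the low-tech approach, a tiny Python helper can compute the next version name with leading zeros (the `_vNN` suffix convention and the function itself are assumptions for illustration):

```python
import re

def next_version(existing):
    """Return the next file name in a v01, v02, ... sequence.

    Assumes names carry a '_vNN.' suffix just before the extension."""
    best_num, best_name = 0, None
    for name in existing:
        m = re.search(r"_v(\d+)\.", name)
        if m and int(m.group(1)) > best_num:
            best_num, best_name = int(m.group(1)), name
    if best_name is None:
        raise ValueError("no versioned files found")
    # Leading zeros keep the names sorting correctly in a file listing:
    return re.sub(r"_v\d+\.", f"_v{best_num + 1:02d}.", best_name)
```

For example, `next_version(["draft_v01.txt", "draft_v02.txt"])` yields `"draft_v03.txt"`, so you never fall back on confusing labels like ‘final2’.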

Folder Structure: Consider the hierarchy for how you want to organize your files, and whether to use a deep or a shallow organization for them.

Here’s an example from the UK Data Archive:

Example of folder structure from UK Data Archive.

Tell Us

How do you name your files? Do you have a system? Is it written down?
Would you change anything about it now, if you could?
What tools do you use to keep your files organized?

Twitter: #LYD16
Instagram: #LYD16
Facebook: #LYD16


See the guidelines on the UO Research Data Management pages
Contact us if you have questions.
Check out the resource board & the changing face of data on Pinterest

Source: materials adapted from LYD website.

Posted in Best practices, File management, Research data management (RDM), Tips & Tricks

Love Your Data (LYD) week – Keep your data safe

This is ‘Love Your Data‘ week, and each day we’ll be sharing a post about one or more fundamental data management practices that you can use. Part 1 of 5. Parts 1, 2, 3, 4, and 5


Follow the 3-2-1 Rule:

  • Keep 3 copies of any important file (1 primary, 2 backup copies)
  • Store files on at least 2 different media types (e.g., 1 copy on an internal hard drive and a second on your department or college’s server, or in secure cloud storage)
  • Keep at least 1 copy offsite (i.e., not at your home or in the campus lab — check with your department or college about offsite or secure cloud storage)

If possible, it is highly recommended that you set up an automated system to back up your files. This is true whether you work alone or as part of a research team. For example, use Syncthing.

Avoid these: 

  • Storing the only copy of your data on your laptop or flash drive
  • Storing critical data on an unencrypted laptop or flash drive
  • Saving copies of your files haphazardly across 3 or 4 places
  • Sharing the password to your laptop or cloud storage account

Today’s activity

Data snapshots or data locks are great for tracking your data from collection through analysis and write up. Librarians call this provenance, and it can be really important.

Errors are inevitable. Data snapshots can save you lots of time when you make a mistake in cleaning or coding your data. Taking periodic snapshots of your data, especially before the next phase begins (collection or processing or analysis) can keep you from losing crucial data and time if you need to make corrections. These snapshots then get archived somewhere safe (not where you store active files) just in case you need them. If something should go wrong, copy the files you need back to your active storage location, keeping the original snapshot in your archival location. For a 5-year longitudinal study, you might take snapshots every quarter. If you will be collecting all the data for your study in a 2-week period, you will want to take snapshots more often, probably every day. How much data can you afford to lose?

Oh, and (almost) always keep the raw data! The only time you might not is when it’s easier and less expensive to recreate the data than to keep it around.

Instructions: Draw a quick workflow diagram of the data lifecycle for your project (check out our examples on Instagram and Pinterest). Think about when major data transformations happen in your workflow. Taking a snapshot of your data just before and after the transformation can save you from heartache and confusion if something goes wrong.

Tell us 

Where do you store your data? Why did you choose those platform(s), locations, or devices?

Twitter: #LYD16
Instagram: #LYD16
Facebook: #LYD16


See the guidelines on the UO Research Data Management pages
Contact us if you have questions
Check out the resource board & the changing face of data on Pinterest

Source for this page: LYD website.

Posted in Best practices, Research data management (RDM), Storage and backup, Tips & Tricks

ORCiD: Credit Where Credit is Due

What is ORCID?

The Open Researcher and Contributor ID (ORCID) is a permanent digital identification number associated with a given researcher. This identifier enables you to create an openly accessible profile for yourself that can be associated with and linked to your research activities and outputs, from grants, to articles, datasets, and citations. ORCID is an open, non-profit, community-driven effort to create and maintain a registry of unique researcher identifiers that can be tracked across publishers and institutions.

Benefits of ORCID

You can associate your ORCID number with all of your publications, data sets, grants, and even presentations to ensure that your work is uniquely identified. This alleviates confusion and helps you distinguish your research activities from those of others. It ensures proper attribution in cases where:

  • You have a common name
  • You have changed your name, or published under slight variations of your name (e.g., following a marriage, or John Doe vs. John A. Doe)
  • You change institutions

A single common identifier makes it easier to find and cite your work in article databases and indexes, and through search engines such as Google Scholar. It also enables the automatic connection between systems.

As more journals and funders begin requesting and using ORCIDs, it can be a huge time saver: ORCID can automatically supply your information, such as basic contact details, awards, and works, to these applications. See this Integration chart to find out which organizations are implementing connections with ORCID.

Some examples of ways to integrate your existing profile with ORCID:

Registering with ORCID

Registering with ORCID is fairly straightforward. Once you register, you can begin to connect other details to your record by linking to other identifiers, publications, grants, etc. You can adjust the privacy settings for the information stored in ORCID at any time.

Questions about ORCID? Contact: Brian Westra

Graduate Students: Submitting your Thesis or Dissertation?

ProQuest, the site used by the University of Oregon to administer the submission of electronic theses and dissertations (ETDs), has begun tracking ORCID numbers. Associating a universal identifier with your work early in your career will ensure that it is always correctly attributed.

Questions about ORCID and ETDs? Contact: Catherine Flynn-Purvis

Posted in Data citation, Data News

NCBI and the NIH Public Access Policy

From the HLIB-NW email list:

Recording Available: NCBI and the NIH Public Access Policy

On March 5, NCBI hosted a full-to-capacity webinar outlining the NIH Public Access Policy, NIHMS and PubMed Central (PMC) submissions, creating My NCBI accounts, use of My Bibliography to report compliance to eRA Commons, and using SciENcv to create biosketches. The slides and Q & A are available on the NCBI FTP site. The March 5 recording is available on the NCBI YouTube channel.

A live re-broadcast of the webinar will be held on April 21, 2015.

Posted in Data News, Tips & Tricks

Thoughts on Data and Ethics, and Resources for Psychology

Thoughts on Data Management and Data Ethics

Earlier this year, Brian Westra and I gave a brief presentation on Data Management issues in an annual seminar that the Department of Psychology holds for its first-year graduate students. The seminar’s larger topic was data ethics. One question that came up was how data management and data ethics relate to one another.

This post has two parts:

1) A few points on how data management and ethics relate. It can be useful to think about this topic explicitly because discussions about it can help to guide research and data management decisions.

2) A list of resources on these topics for new graduate students. Some of the links relate specifically to Psychology, but they all apply in principle across disciplines.

Short version: Data ethics is built on data management. Both are more about one’s frame of mind than about any specific tools one chooses to use. Having said that, it’s important to give oneself exposure to ideas and tools around these topics, in order to push one’s own thinking forward. Some useful resources are listed below to help you get started.

Thoughts on Data Management and Ethics

  • Data ethics is built on data management. Ethical questions and potential problems come up based on data management decisions made in the past, down to topics as seemingly trivial as where data are stored (on a passwordless thumb drive? Or in a place where disgruntled employees or Research Assistants can access or abscond with them?), to what data are even kept (it’s hard to re-use something, for good or ill, if you didn’t record it in the first place).

    This doesn’t just apply to ethically problematic topics, though. Things that may look like bad ideas at first (such as the ability of RAs to remove data from the lab) may not be in certain situations, just as ideas that seem good at first may come to seem bad later. The larger point is that ethical questions about both legitimate and illegitimate uses of research data need to be considered and addressed as they come up, and that DM decisions can help one to predict which questions are more likely than others to come up during the data lifecycle.

  • Talking about both data management and ethics is more than talking about tools. Because data ethics questions come up based on data management decisions, it is true that discussions about data ethics sometimes require at least some minimum level of technical understanding. Basic technical knowledge here can help to answer questions beyond whether certain data should be kept or not, such as which ways of storing and offering access to those data would be acceptable. This basic technical knowledge is important for many ethical discussions, because it can help to shape the conversation to more nuanced topics.

    Rather than the take-home message here being that lacking some amount of technical understanding means that one shouldn’t engage in conversations about data ethics, I think that making education on these topics easily accessible (here at the UO, for instance, through our own DM workshops, workshops from the College of Arts and Sciences Scientific Programming office, and resources such as the Digital Scholarship Center) is important and necessary, as is taking advantage of them.

  • Data Management (and, thus, data ethics) is about having a certain frame of mind, even at a superficial level. Data Management often has to do with thinking about decisions up-front rather than reactively. This frame of mind can also apply to talking about data ethics. Even if some ethical issues haven’t come up yet, having good DM in place can help one to more quickly understand and respond to new issues that do come up.

Resources for New Students (especially in Psychology)

Issues of data management are not going away; indeed, their relevance to individual researchers will likely increase — the White House, for example, recently issued new guidelines requiring Data Management Plans (and encouraging data sharing) for all federal grant-funded research. Below is a list of resources to prompt further thought and discussion among new grad students.

These are listed here with a focus on Psychology; having said that, many of them have relevance beyond the social sciences:

  • A useful summary of tools that are available for graduate students to organize their work (including data), from Kieran Healy, a sociologist at Duke University.
  • An overview of the new “pre-registration” movement in Psychology: “Pre-registration” is when researchers register their hypotheses and methods before gathering any data. In addition to increasing transparency around research projects, this practice can increase how believable results seem, since it can decrease researchers’ incentives to go “fishing” for results in the data. This practice could also presumably be used to build a culture in which all aspects of a project, from methods to data, are shared.
  • Especially relevant for social scientists, a nice summary of several cases that deal with data management and the de-identification of data:
    • A summary of several cases in which de-identified data were able to be re-identified by other researchers, from the Electronic Privacy Information Center (EPIC)
    • A more nuanced, conceptual reply to (and criticism of) focusing on cases such as those in the summary above from EPIC. A take-home message from these readings is that data can sometimes be re-identified in very creative ways not immediately apparent to researchers. Other sites, such as this page from the American Statistical Association, summarize techniques that can be used in order to share sensitive data. Of special note is that “sensitive data” could, if that information were re-identified, include not only medically-related records, but even answers to survey questions about morality or political affiliations.
  • For students at the UO: Sanjay Srivastava, Professor of Psychology, often includes commentary on data analysis and transparency issues on his blog, The Hardest Science.

Feel free to comment here or email Brian with questions about data management issues. Also take a look at our main website for more resources.

Posted in Tips & Tricks, Workshops & Events

Annotate, Annotate, Annotate

This post is part of a series on future-proofing your work (part 1, part 2). Today’s topic is annotating your work files.

Short version: Write a script to annotate your data files in your preferred data analysis program (SPSS and R are discussed as examples). This will let you save data in an open, future-proofed format, without losing labels and other extra information. Even just adding comments to your data files or writing up and annotating your analysis steps can help you and others in the future to figure out what you were doing. Making a “codebook” can also be a good way to accomplish this.

Annotating your files as insurance for the future…

Our goal today is simple: Make it easier to figure out what you were doing with your data, whether one week from now, or one month, or at any point down the road. Think of it this way: if you were hit by a bus and had to come back to your work much later, or have someone else take over for you, would your project come to a screeching halt? Would anyone even know where your data files are, what they represented, or how they were created? If you’re anxiously compiling a mental list of things that you would need to do for anyone else to even find your files, let alone interpret them, read on. I’m going to share a few tips for making this easier with a small amount of effort.

In the examples below, I’ll be using .sav files for SPSS, a statistics program that’s widely used in my home discipline, Psychology. Even if you don’t use SPSS, though, the same principles should hold with any analysis program.

Annotating within Statistics Scripts:

Commenting Code

Following my post on open vs. closed formats, you likely know that data stored in open formats stand a better chance of being usable in the future. When you save a file in an open format, though, sometimes certain types of extra information get lost. The data themselves should be fine, but extra features, such as labels and other “metadata” (information about the data, such as who created the data file, when it was last modified, etc.), sometimes don’t get carried over.

We can get around this while at the same time making things more transparent to future investigators. One way to do this is to save the data in an open format, such as .csv, and then to save a second, auxiliary file along with the data to carry that additional information.

Here’s an example:


SPSS is a popular data analysis program used in the social sciences and beyond. It also uses a proprietary file format, .sav. Thus, we’ll start here with an SPSS file, and move it into an open format.

SPSS has a data window much like any spreadsheet program. Let’s say that a collaborator has given you some data that look like this:


Example Data

Straightforward so far? The data don’t look too complicated: 34 participants (perhaps for a Psychology study), with five variables recorded about each of them. But what does “Rxn1,” the heading of the second variable in the picture, mean? What about “Rxn2?” Does a Grade of “2” mean that the participant is in 2nd grade in school, or that the participant is second-rate in some sport, or something else?

“Ah, but I’ve thought ahead!” your collaborator says smugly. “Look at the ‘Variable’ view in SPSS!” And so we shall:


Example SPSS Variable View

SPSS has a “variable” view that shows metadata about each variable. We can see a better description of what each variable comprises in the “Label” column — “Rxn1,” we can now understand, is a participant’s reaction time on some activity before taking a drug. Your collaborator has even noted in the label for the opaquely-named “RxnAfterTx” variable how that variable was calculated. In addition, if we were to click on the “Values” cell for the “Grade” variable, we would see that 1=Freshman, 2=Sophomore, and so on.

It would have been hard to guess these things from only the short variable names in the dataset. If we want to save the dataset as a .csv file (which is a more open format), one of the drawbacks is that the information from this second screen will be lost. In order to get around that, we can create a new plain-text file that contains commands for SPSS. This file can be saved alongside the .csv data file, and can carry all of that extra metadata information.
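Outside SPSS, the same idea (an open-format data file plus a plain-text sidecar that carries the labels) can be sketched in Python; the file names and label text below come from this example, not from any standard:

```python
import csv

# Variable labels from the example dataset -- metadata a bare .csv would lose:
labels = {
    "Participant_ID": "",
    "Rxn1": "Reaction time before taking drug",
    "Rxn2": "Reaction time after first dose of drug",
    "Grade": "Participant's grade in school",
}

rows = [{"Participant_ID": "1", "Rxn1": "0.42", "Rxn2": "0.39", "Grade": "2"}]

# The open-format data file itself:
with open("example_data.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=list(labels))
    writer.writeheader()
    writer.writerows(rows)

# A plain-text, tab-separated sidecar carrying the labels:
with open("example_data_labels.txt", "w") as f:
    for var, label in labels.items():
        f.write(f"{var}\t{label}\n")
```

Both files are plain text, so any future tool, or a human with a text editor, can recover the data and its meaning together.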

In SPSS, comments can be added to scripts in three ways, following this guide (quoted here):

COMMENT This is a comment and will not be executed.

* This is a comment and will continue to be a comment until the terminating period.

/* This is a comment and will continue to be a comment until the terminating asterisk-slash */

We can use comments to add explanations to our new file as we add commands to it. This actually allows us to store more than the original SPSS file would have held, since we can now have labels as well as the rationale behind them, in the form of comments.

Now able to write comments, we can write and annotate a data-labeling script for SPSS like this:

/* Here you can make notes on your Analysis Plan, perhaps putting the date and your initials to let others know when you last updated things (JL, March 2014): */

/* Perhaps here you might want to list your analysis goals, so that it's clear to future readers:
1. Be able to figure out what I was doing.
2. Get interesting results. */

/* An explanation of what the code below is doing for anyone reading in the future: Re-label the variables in the dataset so that they match what's in the example screenshot from earlier in this post: */

VARIABLE LABELS
Participant_ID "" /* The quotes here just mean that Participant_ID has a blank label */
/ Rxn1 "Reaction time before taking drug"
/ Rxn2 "Reaction time after first dose of drug"
/ Rxn3 "Reaction time after second dose of drug"
/ RxnAfterTx "Rxn1 subtracted from average of Rxn2 and Rxn3"
/ Grade "Participant's grade in school".
EXECUTE. /* Tell SPSS to run the command */

/* Now let's add the value labels to the Grade variable: */

VALUE LABELS
Grade /* For the Grade variable, we'll define what each value (1, 2, 3, or 4) means. */
1 'Freshman'
2 'Sophomore'
3 'Junior'
4 'Senior'.
EXECUTE. /* Tell SPSS to run the command. */

Now if we import a .csv version of the data file into SPSS and run the script above, SPSS will have all of the information that the .sav version of the file had.

While this is slightly more work than just saving in .sav format, by saving the data and the accompanying script in plain-text formats, we future-proof them for use by other researchers with other software (although researchers running other software won’t be able to directly run the script above, they will have access to your data and will be able to read through your script, allowing them to understand the steps you took). By saving using plain-text formats, we also make it easier to use more powerful tools, such as version control systems (about which I might write a future post).

By using comments in our scripts (whether they’re accompanying specific data files or not), we enable future readers to understand the rationale behind our analytic decisions. Every programming language can be expected to have a way to make comments. SPSS’ is given above. In R, you just need to add # in front of a comment. In MATLAB, either start a comment with % or use %{ and %} to enclose a block of text that you want to comment out. In whatever language you’re using, the documentation on how to add comments will likely be among the easiest to find.

Other Approaches:

Another, complementary approach to making data understandable in the future is to create a “codebook.” A codebook is a document that lists every variable in a dataset, as well as every level (“Freshman,” “Sophomore,” etc.) of each variable, and sometimes provides some summary statistics. It gives a printable summary of what every variable represents.

SPSS can generate codebooks automatically, using the CODEBOOK command. R can do the same, using, for example, the memisc package:

library(memisc)  # load the package that provides codebook()
?codebook  # Look at the example given in this help file to see how the memisc package allows adding labels to and generating codebooks from R dataframes.

We can also write a codebook manually, perhaps adding it as a block comment to the top of the annotation script. It might start with something like this:


Codebook for this Example Dataset:

Variable Name: Participant_ID
Variable Label (or the wording of the survey question, etc.): Participant ID Number

Variable Name: Rxn1
Variable Label: Reaction time before taking drug
Variable Statistics:
Range: 0.18 to 0.89
Mean: 0.50
Number of Missing Values: 0
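A codebook entry like the one above can also be computed programmatically. Purely as an illustration (this mimics the manual layout above, not the output format of SPSS’s CODEBOOK or R’s memisc), here it is in Python:

```python
def codebook_entry(name, label, values):
    """Summarize one numeric variable in the manual codebook layout above.

    Missing observations are represented as None in the values list."""
    present = [v for v in values if v is not None]
    return "\n".join([
        f"Variable Name: {name}",
        f"Variable Label: {label}",
        "Variable Statistics:",
        f"Range: {min(present)} to {max(present)}",
        f"Mean: {sum(present) / len(present):.2f}",
        f"Number of Missing Values: {len(values) - len(present)}",
    ])

print(codebook_entry("Rxn1", "Reaction time before taking drug", [0.18, 0.89, 0.43]))
```

Generating entries like this from the data itself keeps the codebook from drifting out of sync with the file it documents.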



Wrapping Up

The use of annotations can save you and your collaborators time in the future by making things clear in the present. In a way, annotating your files (be they data, or code, or summaries of data analysis steps) is a way to accumulate scientific karma. Use these tips to do a favor for future readers, looking forward to a time in the future when you might be treated to the relief of reading over a well-documented file.

Posted in Tips & Tricks