Introduction to Discrete Choice Experiments (DCEs)
A discrete choice experiment (DCE) is a research method where participants are asked to choose between a set of alternatives, each described by a set of attributes with varying levels. By observing these choices, researchers can infer the relative importance of different attributes and the trade-offs people are willing to make. DCEs are widely used in fields like marketing, health economics, transportation, and policy-making to predict consumer preferences and estimate metrics like willingness-to-pay. In simpler terms, a DCE presents respondents with hypothetical scenarios (choice questions) and analyzes their selections to understand what drives their decisions.
Why are DCEs important? They allow us to estimate the value of product features or policy components without directly asking people (which can introduce bias). Instead, preferences are revealed through choices. For example, a DCE can help a car manufacturer determine how much consumers value an electric vehicle’s range vs. its price by seeing which trade-off consumers choose in a series of comparisons.
Designing a good DCE involves careful planning of the attributes and levels, constructing an efficient set of choice questions (the experimental design), collecting responses, and analyzing the results with appropriate statistical models (typically logistic regression models like conditional logit). This guide will walk you through this entire process using DCEtool, a free R package with a graphical interface that simplifies DCE design, survey creation, and analysis. We assume minimal prior knowledge, so each step is explained in detail.
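For reference, the conditional logit model that underlies most DCE analysis assigns each alternative i in a choice set a utility V_i = β′x_i (a weighted sum of its attribute levels) and predicts the probability that a respondent chooses alternative i as

\[ P(i) = \frac{\exp(\beta' x_i)}{\sum_{j \in C} \exp(\beta' x_j)} \]

where C is the set of alternatives in the question. Estimation recovers the weights β, which is what makes attribute importance and trade-offs quantifiable.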
Tip: If you need a more advanced, professional solution for large-scale studies, consider ChoiceLab, Cognitur’s online platform for DCEs. ChoiceLab allows you to design DCEs and deploy them to respondents via a web link, collecting and analyzing data seamlessly on the platform. We will use DCEtool for this tutorial, but we’ll highlight how ChoiceLab can make certain steps even easier for professional use.
Step 1: Setting Up Your Environment and Installing DCEtool
Before designing a DCE, you need a working R environment since DCEtool is an R package. If you don’t have R installed, download it from the CRAN website, and we recommend using RStudio as a user-friendly interface for running R code. Once R is ready:
- Install the DCEtool package from CRAN by running the following in your R console:
install.packages("DCEtool")
library(DCEtool)
- DCEtool depends on the Shiny package (for the graphical interface). If you haven’t used Shiny before, install it as well:
install.packages("shiny")
library(shiny)
With these installed, you can launch the DCEtool graphical interface by running the following in the R console:
DCEtool()
This will open a Shiny app (either in your RStudio Viewer or web browser) that serves as a step-by-step wizard for creating and running a discrete choice experiment. You won’t need to write any further code – the tool provides interactive forms and buttons for each step of the process.
Note: DCEtool is an open-source package created by Daniel Pérez Troncoso (2022). It was designed to be intuitive and accessible, allowing users to “design, conduct, and analyze DCEs from a visual interface” without deep programming knowledge.
Step 2: Defining Attributes, Levels, and Options (Design Settings)
Once the DCEtool app is running, the first interface you’ll encounter is the “Design settings” tab. This is where you define the fundamental characteristics of your experiment. Essentially, you’ll input what attributes you plan to include in your DCE and how many levels each attribute has. Don’t worry about naming them yet – this step is just to set up the structure of the experiment.
Attributes and Levels: Attributes are the features or factors that vary between choices (for example: Price, Color, Size, Warranty, etc. for a product). Each attribute has a set of possible levels (e.g. Price might have levels $10, $15, $20; Color might have levels Red, Blue, Green; and so on). Think carefully about which attributes are relevant to your research question, as well as realistic levels for each.
In the Design settings form, you will:
- Specify the number of attributes in your experiment (the tool will then expect you to input the number of levels for each attribute). For example, if your DCE will consider Price, Color, and Size, that’s 3 attributes.
- Specify the number of levels for each attribute. After entering the count of attributes, DCEtool will prompt you to enter how many levels each one has. Initially, just enter the number of levels (e.g. 3 levels for Price, 3 for Color, 2 for Size, etc.). You will give descriptive names to these levels later.
Next, click “Save attributes”. The app will then show additional fields to define the structure of the choice experiment:
- Number of alternatives per choice set: How many options will the respondent choose from in each question? Commonly this is 2 or 3 alternatives per choice task (not counting a possible “None” or “opt-out” option). For example, setting 2 alternatives means each choice question will ask the respondent to choose between two hypothetical products/alternatives.
- Number of choice sets (questions) each respondent will answer: How many choice tasks will each respondent see? DCEtool will generate this number of distinct choice scenarios based on your attributes and levels. Keep in mind that there is a minimum number of questions required to estimate all the effects: you need at least L – K choice sets, where L is the total number of levels across all attributes and K is the number of attributes (if you plan to include an “opt-out” option in each choice, add +1 to this minimum). In practice, you’ll likely use more than the bare minimum to improve statistical reliability, but this rule ensures the design is identified (for example, if you have 5 attributes each with 3 levels, L = 15 and K = 5, so at least 10 choice sets are needed; see the short calculation after this list).
- Opt-out option: Decide if you want to include a “None” or “No choice” alternative in each choice set. This is an additional alternative that represents opting out of making a choice. If you include an opt-out, it counts as an extra alternative and influences the design requirements (+1 to the L – K rule above). Opt-outs can be useful in certain studies (e.g., when respondents could genuinely choose nothing), but they also add complexity.
- Bayesian efficient design (optional): DCEtool offers an advanced option to use a Bayesian approach for generating an efficient experimental design. In simple terms, an efficient design tries to choose the combinations of attribute levels for each question that yield the most information for your model estimates. A Bayesian efficient design incorporates prior beliefs about the values of the attribute effects (priors) to optimize the design. If you have prior data or a pilot study, you can input prior coefficient values here. If you don’t (which is common for a first experiment), you can leave the default priors as zero, which tells the tool to assume no prior preference direction. Using the default (non-Bayesian) approach is perfectly fine for most first-time designs.
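As a quick sanity check on the L – K rule, here is a minimal R sketch (the attribute counts are hypothetical, mirroring the example above):
levels_per_attribute <- c(3, 3, 3, 3, 3)   # hypothetical: 5 attributes with 3 levels each
L <- sum(levels_per_attribute)             # total levels across all attributes = 15
K <- length(levels_per_attribute)          # number of attributes = 5
opt_out <- FALSE                           # set TRUE if an opt-out alternative is included
L - K + as.integer(opt_out)                # minimum number of choice sets = 10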
Once you’ve filled in these details, click “Save settings.” This locks in your experimental design parameters. DCEtool now has enough information to generate a draft design for your DCE.

Figure: The Design settings tab in DCEtool. Here, we have input the number of attributes and levels for a hypothetical experiment, as well as specified 2 alternatives per choice set and the number of choice sets each respondent will see. Advanced options like Bayesian design can be enabled if prior information is available (otherwise default values are used). Once these settings are saved, the tool will use them to construct the experimental design.
Step 3: Generating the Experimental Design (Design Matrix)
After saving the design settings, proceed to the “Design matrix” tab. This is where DCEtool generates the actual experimental design – essentially, the set of choice questions (and the combination of attribute levels in each alternative for each question) that will be presented to respondents.
- Generate the design: Click the “Generate design” button. DCEtool will compute a matrix that represents your choice sets. Initially, this design matrix will use numeric codes for levels (since we haven’t named the attributes or levels yet). Each row in the design matrix typically corresponds to an alternative within a choice set, and columns correspond to attributes (encoded as dummy variables). For instance, if an attribute has 3 levels, the design matrix might use values 0/1 to indicate the presence of a level, with one level omitted to serve as a baseline (this omission avoids multicollinearity in estimation; a small illustration of this coding appears after this list). Don’t be alarmed if this looks confusing – the tool will help label everything in the next steps.
- Name the attributes: To make the design human-readable, click “Name the attributes.” The interface will prompt you to enter a descriptive name for each attribute (e.g., rename “Attribute 1” to “Price”, “Attribute 2” to “Color”, etc.). Enter all attribute names and hit save.
- Name the levels for each attribute: After saving attribute names, DCEtool will allow you to input names for each level of each attribute. For example, for the attribute “Color” with 3 levels, you might name the levels “Red,” “Blue,” and “Green.” You will do this one attribute at a time. Once all levels for all attributes are named and saved, a new button typically appears.
- Apply names to design matrix: Click the button (often labeled “Apply names” or similar) that appears after you’ve named everything. This will update the design matrix display so that the columns use your attribute and level names instead of generic codes. Now you’ll see each choice scenario in a more understandable format. For instance, you might see a table where each choice set is represented with actual levels like “Price = $10, Color = Red, Size = Large” vs “Price = $15, Color = Blue, Size = Small” for a two-alternative design, etc.
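To make the dummy coding concrete, here is a minimal R illustration (not DCEtool output – just base R’s model.matrix applied to a hypothetical 3-level Price attribute):
price <- factor(c("$10", "$15", "$20"))    # hypothetical 3-level attribute
model.matrix(~ price)[, -1]                # two 0/1 columns; the omitted baseline ($10) shows as all zeros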
At this point, it’s a good idea to review the design matrix. Look at the combinations of attribute levels being presented. Do they all make sense logically? Efficient designs will sometimes produce combinations that seem odd or unrealistic (because the goal is statistical efficiency, not realism). If you spot a combination that truly doesn’t make sense for respondents, you might consider adjusting attributes/levels or manually tweaking the design later. However, avoid making arbitrary changes, as that can reduce the statistical validity of the design.
- Save the design (optional): DCEtool allows you to export the design matrix to an Excel file for record-keeping or manual editing. Click “Save design” and choose a location to save the Excel sheet (e.g., DCE_design.xlsx). In that file, you’ll have all the information about attributes, levels, and which combinations form each choice set; the decoded design can also be used to create a paper-and-pencil survey. If you decide to modify the design in Excel (say, remove or alter a scenario), be very careful – the structure must remain consistent with the parameters you set (same number of attributes, levels, alternatives, etc.) for DCEtool to be able to read it back in. You can always load a saved design later in DCEtool to resume your work, which is helpful if you close the app or want to share the design with a colleague. A sketch for inspecting the exported file follows below.
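For example, a minimal R sketch for looking at the exported design (assuming you saved it as DCE_design.xlsx and have the readxl package installed):
library(readxl)                            # install.packages("readxl") if needed
design <- read_excel("DCE_design.xlsx")    # the file exported from DCEtool
head(design)                               # inspect the first few rows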

By the end of this step, you have an experimental design ready – essentially the blueprint of your DCE survey. Next, you’ll create the survey that participants will actually see, using this design.
Step 4: Creating the Survey Questionnaire in DCEtool
With the design in hand, the next step is to prepare the actual survey interface for respondents. Navigate to the “Create a survey” tab in DCEtool. Here, you will set up the content that wraps around the choice questions – things like introduction text, instructions, and the labels that respondents will see for each alternative.
Key tasks in this step:
- Introductory Text: You can write a welcome message or instructions for participants. This is a crucial part of your survey – it should clearly explain the context and what the respondent is asked to do. For example, you might write: “Thank you for participating. In this survey, we will present you with {{number}} questions about choosing a preferred product. For each question, please imagine you are shopping and choose the option you prefer most. There are no right or wrong answers – we are only interested in your preferences.” You can format this text in Markdown (a simple formatting syntax) to add headings, bold, bullet points, etc. DCEtool provides a live preview beside the text box so you can see how your formatted introduction will look. Make sure your introduction is concise and clear.
- Concluding Text: Similarly, you can provide an end message (a thank-you note or further instructions after completion). For example, “This concludes the survey. Thank you for your time! Click submit to record your responses.” This text will appear on the final screen of the survey, after the respondent has answered all choice questions.
- Alternative Labels: Each choice set in a DCE typically presents alternatives side by side (or one above the other). Instead of generically calling them “Option 1” and “Option 2,” you can label them. DCEtool allows you to name each alternative. For instance, you could label alternatives as “Plan A” and “Plan B,” or “Product 1” and “Product 2,” etc., depending on context. If you included an opt-out, it might be labeled something like “Neither” or “No selection.” Choose labels that are neutral (don’t bias the respondent) and clear.
As you fill in the intro text, end text, and alternative labels, the main panel will update with a live preview of a sample choice question using your design. This preview shows exactly how a question will appear to respondents, including the attribute labels, levels for each alternative, and the selection buttons (usually radio buttons) for making a choice.
Take a moment to review the preview. Check that:
- The wording of your intro and conclusion is correct (and formatting is as intended).
- Alternative labels are displaying properly.
- The choice question layout makes sense: for each attribute row, the levels for each alternative are shown under the respective alternative label, and it’s clear how to make a selection (e.g., a circle or button under each alternative column).
- No text is cut off or misaligned (if it is, you might need to shorten some labels or adjust formatting).

Figure: Preview of a DCE question in DCEtool’s Create a survey tab. The example shows two alternatives (“Option A” and “Option B”) with several attributes (e.g., Attribute1, Attribute2) listed in rows. The respondent will choose one alternative per question by selecting the corresponding button. On the left, an introduction has been written in Markdown and is shown in formatted form (not visible in this cropped preview). This live preview allows you to verify that the question text and formatting look correct before launching the survey.
At this stage, you’ve essentially set up your survey instrument. You have the design (which questions to ask) and the content (how the survey is presented to respondents). Now it’s time to actually run the experiment and collect responses.
Step 5: Launching the DCE Survey and Collecting Responses
Once your survey is prepared, switch to the “Survey” tab in DCEtool. This is where you will deploy the survey to respondents (at least in a local setting). DCEtool can launch an interactive survey session right from R – effectively turning your computer into a survey kiosk for as long as the app is running. This is extremely useful for testing your DCE or collecting data from a small group of respondents (e.g., in a controlled lab setting or pilot study).
On the Survey tab, you’ll encounter a few options before launching:
- Serial DCE options: DCEtool provides three modes:
- No serial approach: This is the standard mode – every respondent gets the same set of choice questions (the design you generated). Use this for a typical fixed-design DCE.
- Pure serial (Bliemer & Rose method): In this advanced mode, the survey adapts after each choice. The tool re-estimates a quick model after each question and adjusts the remaining questions for that respondent to maximize information based on their previous answers. This is known as an adaptive or sequential DCE and is based on research by Bliemer & Rose (it aims to improve efficiency by personalizing the design in real time).
- Five-by-five serial approach: Another adaptive strategy where the design updates after every 5 respondents (rather than after each question). This way, batches of respondents get updated designs periodically.
- Launch the survey: When you’re ready, click “Launch the survey.” DCEtool will open the DCE survey in a new window (or a browser tab). This is the actual questionnaire interface that a respondent interacts with. It will display your introduction text, then the first choice question with the alternatives and attributes, and a “Next” button. Respondents select their preferred option for each question and proceed. After the last choice set, the concluding text is shown along with an option to Submit or Start a new survey.
- Gathering multiple responses: If you are collecting data from several people using the same computer, each person can take the survey one after another. After one respondent finishes and the end screen appears, you can click “Start a new survey” to reset the questionnaire for the next respondent. All responses will be stored until you choose to analyze or save them. Make sure respondents don’t accidentally close the survey window mid-way; if they do, you might lose that partially completed data.
Important: Only close the survey or move to the results after a respondent has fully completed all questions and reached the final screen. If you stop the survey in the middle (or navigate to the Results tab too early), the data from that session may not be recorded. Always wait for the final “Thank you” or end message to appear, which signals that the responses have been recorded in the tool’s memory.
DCEtool’s local survey launch is ideal for pilots or classroom demonstrations. However, if you intend to collect a large number of responses or want to reach respondents remotely online, the local approach can become cumbersome (you would need to send the R/Shiny app to respondents or host it on a server, which requires technical setup). This is where an online platform like ChoiceLab shines. With ChoiceLab, you could deploy your DCE as a web survey accessible via a simple link – no R environment needed on the respondent’s side, and no manual data merging for you. All responses would be automatically collected on the platform for analysis. For a professional study with dozens or hundreds of participants in different locations, using an online DCE platform saves a lot of time and reduces the risk of data loss or technical issues.
ChoiceLab vs. DCEtool: In our example, we used DCEtool to manually launch and manage the survey. In contrast, Cognitur’s ChoiceLab platform would handle survey deployment and data collection automatically. You would design the experiment (using a user-friendly web interface), and ChoiceLab would generate a link to send to respondents. Participants could complete the DCE on their own devices, and you could monitor response progress in real-time. The choice between DCEtool and ChoiceLab often comes down to scale and convenience – DCEtool is great for learning and small projects, whereas ChoiceLab is tailored for larger-scale, professional research.
Step 6: Analyzing the Results of Your DCE
After collecting responses (even if it’s just test data from yourself), DCEtool can analyze the results right away. Click on the “Analyze the results” button or navigate to the “Results” tab only after a survey session has completed and is on the final screen. Upon entering the Results tab, the tool will display the data and provide options for statistical analysis.
Here’s how to proceed with analysis in DCEtool:
- Save the raw results: It’s wise to first save the dataset of responses. The interface has an option to “Save results” (likely exporting to an Excel file). Do this to have a backup of all the choice data collected. You can reload this data later if needed, or even analyze it outside of DCEtool if you prefer (for example, using R’s mlogit or apollo packages for custom analysis). Saving ensures you won’t lose data if the app or R session closes.
- Price variable coding (if applicable): If one of your attributes is a price or cost, DCEtool provides a special utility to compute a continuous price variable. This is used for calculating willingness-to-pay (WTP). Essentially, if your design had price as a categorical attribute (e.g., $10, $15, $20 as discrete levels), the tool can create a continuous representation (a numeric variable for price) named, by default, cont_price. This step is necessary because WTP calculations typically require treating price as a numeric variable in the model. Use the “Price variable coding” menu to generate this; once done, you’ll see a new column (e.g., cont_price) in the data table.
- Estimating models: Now the exciting part – estimating the choice model to quantify preferences. DCEtool’s Results interface has an “Estimation” section with a drop-down menu of model options (a do-it-yourself estimation sketch using the mlogit package follows this list). The two primary models you might use are:
- Conditional Logit Model (CLM): This is the most common model for DCE data (often referred to as a multinomial logit when all regressors are alternative attributes). It assumes each attribute’s effect is fixed across all respondents. To run this, simply select the option for a conditional logit. DCEtool will perform the estimation and typically display the coefficients for each attribute level, along with statistical significance (p-values, etc.). For example, you might see a coefficient for “Color = Blue” indicating how that level influences utility relative to the omitted baseline level. A positive coefficient means that level increases the probability of an alternative being chosen (relative to the baseline); a negative one means it decreases the choice probability.
- Mixed Logit Model (MXL): This is an extension that allows for preference heterogeneity (i.e., it assumes different respondents may have different tastes, by treating some coefficients as random draws from a distribution). DCEtool can estimate a mixed logit as well. However, note that the implementation is somewhat experimental; the documentation notes that it can occasionally crash, especially with many parameters or certain naming issues. If you attempt a mixed logit and the app crashes or gives an error, a quick fix is to simplify your variable names (e.g., avoid spaces or special characters) in the results Excel file and reload it. For most basic analyses, the conditional logit will suffice, unless you specifically need to model preference variation.
- Review model output: After running a model, examine the output. DCEtool lists the estimated coefficients for each attribute level (except the omitted base level of each attribute), and possibly the standard errors or t-statistics. For example, if “Price” was an attribute, you’ll have a coefficient for price (or for each non-baseline price level). If price was coded continuously (as cont_price), the coefficient on cont_price can be interpreted as the disutility of price (typically negative, since a higher price lowers choice probability).
- Calculate Willingness-to-Pay (WTP): One useful post-estimation calculation is WTP for attribute levels. WTP tells you how much of a price increase a consumer is willing to accept in exchange for a change in an attribute. If you have a price variable in the model, WTP for an attribute level can be approximated as –(attribute coefficient) / (price coefficient). DCEtool automates this process. If the last estimated model is a conditional logit (ensure you re-run a CLM if you had run a mixed logit last), you can use the “WTP” calculation feature. The tool will output the WTP for each attribute level (usually relative to the base level). For instance, if “Red” is the base color, “Blue” has a coefficient of 0.5, and the price coefficient is –0.1, then the WTP for “Blue” is –(0.5)/(–0.1) = $5 (meaning consumers would pay an extra $5 for Blue over Red, on average).
- Visualizing results: DCEtool can also create a graphical representation of the results. Typically, this is a bar chart of the attribute-level coefficients with error bars, or a similar visualization to help interpret the findings. If you used the WTP feature, it might also plot the WTP values. Look for a button or option to “Plot coefficients” or “Show graph”. This is a convenient way to communicate which attributes had the biggest impact. For example, you might see that the bar for “High Quality” is much taller than for “Brand Name,” indicating quality had a larger effect on choice probability than brand.
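If you would rather reproduce the estimation outside DCEtool, here is a minimal sketch using the mlogit package. All file and column names below (DCE_results.xlsx, choice, chid, alt, color_blue, size_large) are hypothetical placeholders – substitute the names actually present in your exported results file:
library(readxl)
library(mlogit)
dat <- read_excel("DCE_results.xlsx")          # hypothetical export from "Save results"
# Reshape to mlogit's long format: one row per alternative per choice situation
dat_ml <- mlogit.data(dat, choice = "choice", shape = "long",
                      chid.var = "chid", alt.var = "alt")
# Conditional logit with attribute dummies plus continuous price; "| 0" drops intercepts
clm <- mlogit(choice ~ color_blue + size_large + cont_price | 0, data = dat_ml)
summary(clm)
# WTP per level: -(level coefficient) / (price coefficient), as described above
b <- coef(clm)
-b[setdiff(names(b), "cont_price")] / b["cont_price"]
# Mixed logit variant (random normal coefficient on price), analogous to the MXL option:
# mxl <- mlogit(choice ~ color_blue + size_large + cont_price | 0,
#               data = dat_ml, rpar = c(cont_price = "n"), R = 500)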

Figure: Example output from DCEtool’s Results tab – a graphical summary of model estimates. In this illustrative graph, each bar represents a coefficient for an attribute level from the conditional logit model, and the error bars denote confidence intervals. Such visualizations help in quickly identifying which attributes significantly influence choices. DCEtool also computes the willingness-to-pay for each attribute level (if a price attribute is included), providing results like “Consumers are willing to pay $X more for level Y of an attribute,” which are invaluable for decision-making.
Congratulations – you have designed, executed, and analyzed a discrete choice experiment using DCEtool! The results you obtained (coefficients, significance, WTP, etc.) can now be interpreted in the context of your research question. For instance, you might conclude something like: “Price was found to have a significant negative effect on choice (as expected). Consumers showed a strong preference for the attribute ‘Battery Life: 10 hours’ over ‘Battery Life: 5 hours’, with an estimated WTP of $20 for the longer battery life.” These insights can inform product design, policy decisions, marketing strategies, and more.
Conclusion: From DCEtool to ChoiceLab – Scaling Up Your DCEs
In this guide, we walked through a complete DCE workflow using DCEtool in R – from setting up the design, creating a survey, all the way to analyzing the results. DCEtool proved to be a powerful free tool that makes discrete choice experiments accessible even to those with minimal coding experience, thanks to its Shiny-based graphical interface. It’s ideal for learning about DCEs, conducting pilot studies, or even small-scale projects where you can collect data locally. By using DCEtool, you’ve gained hands-on experience with the mechanics of DCE design (deciding attributes/levels, generating an efficient design), survey administration, and model estimation.
However, as your needs grow, you might require a more robust solution for deploying surveys to a wider audience or collaborating in a team. This is where ChoiceLab – Cognitur’s professional DCE platform – comes into play. ChoiceLab takes the principles you learned here and streamlines them further for real-world applications:
- Web-based Deployment: With ChoiceLab, your DCE survey can be taken by respondents anywhere, anytime via a simple URL. No need for respondents to have any special software – they just use their web browser. This significantly lowers the barrier to collect data from larger or geographically dispersed samples.
- User-Friendly Design Interface: Designing a DCE on ChoiceLab is done through an online interface (no coding in R required at all). This can save time and allows non-technical team members to contribute to experiment design. The interface handles attributes, levels, and design generation behind the scenes, likely using similar efficient design principles.
- Automated Data Collection and Analysis: All responses are automatically stored in the platform’s database. You won’t risk losing data due to an accidental app closure, and you won’t need to manually merge or input anything. Moreover, ChoiceLab provides built-in analysis tools (much like DCEtool’s results tab, but possibly with more advanced options or smoother performance for large datasets). You can run models and see results on the platform directly, and perhaps export datasets for further analysis if needed.
- Professional Support and Collaboration: As a commercial service, ChoiceLab may offer support, tutorials, and the ability to have multiple users work on the same project, which is valuable in a corporate or academic research setting.
In summary, for learning and quick projects, DCEtool is a fantastic starting point, giving you full control and insight into the DCE process. For professional, scalable studies, Cognitur’s ChoiceLab provides a production-ready environment that can save you time and ensure reliability. Many researchers start with tools like DCEtool to prototype an experiment, and once they are confident in the design, they move to a platform like ChoiceLab to deploy it to hundreds of respondents with ease.
We hope this step-by-step guide has demystified the process of designing and running a discrete choice experiment. You’ve seen that with the right tools, what might seem like a complex task (conjoint analysis, experimental design, logistic regression modeling) can be made accessible. Now you can confidently apply DCEs to explore preferences in your own field – whether it’s understanding customer product choices, patient treatment preferences, or any scenario where trade-offs are at play.
Next steps: Take your DCE results and share them with stakeholders – the insights are often very actionable. If you found yourself limited by the logistics of using R locally, consider reaching out to Cognitur about ChoiceLab for a demo or trial, to see how it can facilitate your future projects. Happy experimenting!